Jan 30 21:43:17 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 30 21:43:17 crc restorecon[4698]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:43:17 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 21:43:18 crc restorecon[4698]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 21:43:18 crc restorecon[4698]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 21:43:18 crc restorecon[4698]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 21:43:18 crc restorecon[4698]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 21:43:18 crc restorecon[4698]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: 
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 
21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:43:18 crc 
restorecon[4698]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 21:43:18 crc restorecon[4698]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 30 21:43:18 crc restorecon[4698]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 21:43:18 crc restorecon[4698]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 21:43:18 crc restorecon[4698]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 
21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:43:18 crc restorecon[4698]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 21:43:18 crc restorecon[4698]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 21:43:18 crc restorecon[4698]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 30 21:43:19 crc kubenswrapper[4869]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 21:43:19 crc kubenswrapper[4869]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 30 21:43:19 crc kubenswrapper[4869]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 21:43:19 crc kubenswrapper[4869]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
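Note: the restorecon pass above is almost entirely "not reset as customized by admin" messages, which is expected rather than an error. container_file_t is a customizable type in the targeted policy, so restorecon leaves files carrying it alone unless forced, and the MCS category pairs (s0:c7,c13, s0:c247,c522, and so on) are the per-pod isolation categories the container runtime assigned when it created those files. A minimal read-only sketch for inspecting this on a RHEL/Fedora-style host with policycoreutils installed (the path is taken from this log; the exact customizable_types location assumes the targeted policy):

    # compare the on-disk label with the policy default for a path
    ls -Zd /var/lib/kubelet/plugins
    matchpathcon /var/lib/kubelet/plugins
    # customizable types are listed in the loaded policy; container_file_t is one of them
    cat /etc/selinux/targeted/contexts/customizable_types
    # dry-run a recursive relabel; -F instead of -n would force customizable types back to the default
    restorecon -Rnv /var/lib/kubelet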
Jan 30 21:43:19 crc kubenswrapper[4869]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 30 21:43:19 crc kubenswrapper[4869]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.638292 4869 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.643210 4869 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.643228 4869 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.643233 4869 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.643237 4869 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.643242 4869 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.643249 4869 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.643264 4869 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.643271 4869 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.643276 4869 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.643350 4869 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.643357 4869 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.643960 4869 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.643983 4869 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.643989 4869 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.643995 4869 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644000 4869 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644011 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644016 4869 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644030 4869 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644038 4869 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
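Note: each "Flag ... has been deprecated" line above points at the same fix: move the setting into the KubeletConfiguration file passed via --config (--pod-infra-container-image is the exception; per the message, the CRI runtime now owns the sandbox image). A minimal sketch of such a file covering the flags named in this log; the field names are the documented KubeletConfiguration equivalents on recent kubelets, but every value below is an illustrative placeholder, not read from this node:

    cat <<'EOF' > /etc/kubernetes/kubelet-config-example.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: unix:///var/run/crio/crio.sock      # was --container-runtime-endpoint
    volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec  # was --volume-plugin-dir
    registerWithTaints:                                           # was --register-with-taints
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
    systemReserved:                                               # was --system-reserved
      cpu: 500m
      memory: 1Gi
    evictionHard:        # --minimum-container-ttl-duration is superseded by eviction settings
      memory.available: 100Mi
    EOF
    # then start the kubelet with: kubelet --config /etc/kubernetes/kubelet-config-example.yaml ...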
Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644044 4869 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644056 4869 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644061 4869 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644067 4869 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644076 4869 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644081 4869 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644086 4869 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644091 4869 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644095 4869 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644099 4869 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644104 4869 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644115 4869 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644121 4869 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644127 4869 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
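Most of the W-level noise in this startup is feature_gate.go:330 rejecting gate names that the embedded Kubernetes feature-gate registry does not know (they are OpenShift-side gates), and the same list is replayed several times before the server comes up. A short script, assuming the journal has been saved to kubelet.log, collapses the repetition into one count per gate:

```python
# Collapse the repeated "unrecognized feature gate" warnings above into
# one count per gate. The kubelet.log path is an assumption.
import re
from collections import Counter

# \s+ instead of a single space also matches entries that a wrapped
# capture split between the colon and the gate name, as happens several
# times in this dump.
PATTERN = re.compile(r"unrecognized feature gate:\s+(\w+)")

with open("kubelet.log") as f:
    gates = Counter(PATTERN.findall(f.read()))

for gate, count in sorted(gates.items()):
    print(f"{gate}: {count}x")
```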
Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644133 4869 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644138 4869 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644143 4869 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644147 4869 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644152 4869 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644159 4869 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644164 4869 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644169 4869 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644177 4869 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644182 4869 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644187 4869 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644239 4869 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644246 4869 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644251 4869 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644256 4869 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644260 4869 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644265 4869 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644269 4869 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644274 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644278 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644303 4869 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644312 4869 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644316 4869 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644321 4869 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644326 4869 feature_gate.go:330] unrecognized feature gate: Example Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644332 4869 feature_gate.go:330] unrecognized feature gate: 
DNSNameResolver Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644337 4869 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644343 4869 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644349 4869 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644355 4869 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644360 4869 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644369 4869 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644374 4869 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644379 4869 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644384 4869 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644388 4869 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.644393 4869 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.645634 4869 flags.go:64] FLAG: --address="0.0.0.0" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.645670 4869 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.645685 4869 flags.go:64] FLAG: --anonymous-auth="true" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.645692 4869 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.645699 4869 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.645704 4869 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.645710 4869 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.645716 4869 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.645724 4869 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.645729 4869 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.645740 4869 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.645745 4869 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.645750 4869 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.645754 4869 flags.go:64] FLAG: --cgroup-root="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.645759 4869 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.645763 4869 flags.go:64] FLAG: --client-ca-file="" Jan 30 21:43:19 crc 
kubenswrapper[4869]: I0130 21:43:19.645767 4869 flags.go:64] FLAG: --cloud-config="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.645775 4869 flags.go:64] FLAG: --cloud-provider="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.645780 4869 flags.go:64] FLAG: --cluster-dns="[]" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.645791 4869 flags.go:64] FLAG: --cluster-domain="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.645796 4869 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.645800 4869 flags.go:64] FLAG: --config-dir="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.645805 4869 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.645809 4869 flags.go:64] FLAG: --container-log-max-files="5" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.645815 4869 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.645846 4869 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.645856 4869 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.645861 4869 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.645865 4869 flags.go:64] FLAG: --contention-profiling="false" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.645883 4869 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.645887 4869 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.645911 4869 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.645916 4869 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.645923 4869 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.645931 4869 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.645936 4869 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.645940 4869 flags.go:64] FLAG: --enable-load-reader="false" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.645945 4869 flags.go:64] FLAG: --enable-server="true" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.645950 4869 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.645961 4869 flags.go:64] FLAG: --event-burst="100" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.645965 4869 flags.go:64] FLAG: --event-qps="50" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.645969 4869 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.645976 4869 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.645980 4869 flags.go:64] FLAG: --eviction-hard="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.645986 4869 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.645991 4869 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 30 21:43:19 crc 
kubenswrapper[4869]: I0130 21:43:19.646033 4869 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646039 4869 flags.go:64] FLAG: --eviction-soft="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646043 4869 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646313 4869 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646352 4869 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646363 4869 flags.go:64] FLAG: --experimental-mounter-path="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646374 4869 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646386 4869 flags.go:64] FLAG: --fail-swap-on="true" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646395 4869 flags.go:64] FLAG: --feature-gates="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646410 4869 flags.go:64] FLAG: --file-check-frequency="20s" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646422 4869 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646433 4869 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646444 4869 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646455 4869 flags.go:64] FLAG: --healthz-port="10248" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646466 4869 flags.go:64] FLAG: --help="false" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646475 4869 flags.go:64] FLAG: --hostname-override="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646484 4869 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646494 4869 flags.go:64] FLAG: --http-check-frequency="20s" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646504 4869 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646513 4869 flags.go:64] FLAG: --image-credential-provider-config="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646522 4869 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646539 4869 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646548 4869 flags.go:64] FLAG: --image-service-endpoint="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646558 4869 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646567 4869 flags.go:64] FLAG: --kube-api-burst="100" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646577 4869 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646586 4869 flags.go:64] FLAG: --kube-api-qps="50" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646595 4869 flags.go:64] FLAG: --kube-reserved="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646604 4869 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646613 4869 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 30 21:43:19 crc 
kubenswrapper[4869]: I0130 21:43:19.646622 4869 flags.go:64] FLAG: --kubelet-cgroups="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646631 4869 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646640 4869 flags.go:64] FLAG: --lock-file="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646649 4869 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646659 4869 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646668 4869 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646684 4869 flags.go:64] FLAG: --log-json-split-stream="false" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646694 4869 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646703 4869 flags.go:64] FLAG: --log-text-split-stream="false" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646713 4869 flags.go:64] FLAG: --logging-format="text" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646722 4869 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646732 4869 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646740 4869 flags.go:64] FLAG: --manifest-url="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646749 4869 flags.go:64] FLAG: --manifest-url-header="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646777 4869 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646786 4869 flags.go:64] FLAG: --max-open-files="1000000" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646798 4869 flags.go:64] FLAG: --max-pods="110" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646808 4869 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646817 4869 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646826 4869 flags.go:64] FLAG: --memory-manager-policy="None" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646835 4869 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646845 4869 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646854 4869 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646864 4869 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646953 4869 flags.go:64] FLAG: --node-status-max-images="50" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646968 4869 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646981 4869 flags.go:64] FLAG: --oom-score-adj="-999" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.646994 4869 flags.go:64] FLAG: --pod-cidr="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.647004 4869 flags.go:64] FLAG: 
--pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.647019 4869 flags.go:64] FLAG: --pod-manifest-path="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.647028 4869 flags.go:64] FLAG: --pod-max-pids="-1" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.647038 4869 flags.go:64] FLAG: --pods-per-core="0" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.647047 4869 flags.go:64] FLAG: --port="10250" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.647056 4869 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.647066 4869 flags.go:64] FLAG: --provider-id="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.647075 4869 flags.go:64] FLAG: --qos-reserved="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.647084 4869 flags.go:64] FLAG: --read-only-port="10255" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.647093 4869 flags.go:64] FLAG: --register-node="true" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.647102 4869 flags.go:64] FLAG: --register-schedulable="true" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.647111 4869 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.647127 4869 flags.go:64] FLAG: --registry-burst="10" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.647136 4869 flags.go:64] FLAG: --registry-qps="5" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.647145 4869 flags.go:64] FLAG: --reserved-cpus="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.647154 4869 flags.go:64] FLAG: --reserved-memory="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.647165 4869 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.647175 4869 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.647184 4869 flags.go:64] FLAG: --rotate-certificates="false" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.647193 4869 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.647202 4869 flags.go:64] FLAG: --runonce="false" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.647212 4869 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.647223 4869 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.647232 4869 flags.go:64] FLAG: --seccomp-default="false" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.647241 4869 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.647252 4869 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.647264 4869 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.647274 4869 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.647284 4869 flags.go:64] FLAG: --storage-driver-password="root" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.647293 4869 flags.go:64] FLAG: --storage-driver-secure="false" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 
21:43:19.647302 4869 flags.go:64] FLAG: --storage-driver-table="stats" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.647311 4869 flags.go:64] FLAG: --storage-driver-user="root" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.647320 4869 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.647330 4869 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.647339 4869 flags.go:64] FLAG: --system-cgroups="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.647348 4869 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.647363 4869 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.647372 4869 flags.go:64] FLAG: --tls-cert-file="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.647413 4869 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.647428 4869 flags.go:64] FLAG: --tls-min-version="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.647439 4869 flags.go:64] FLAG: --tls-private-key-file="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.647448 4869 flags.go:64] FLAG: --topology-manager-policy="none" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.647457 4869 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.647467 4869 flags.go:64] FLAG: --topology-manager-scope="container" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.647476 4869 flags.go:64] FLAG: --v="2" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.647489 4869 flags.go:64] FLAG: --version="false" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.647507 4869 flags.go:64] FLAG: --vmodule="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.647518 4869 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.647528 4869 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.647793 4869 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.647807 4869 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.647830 4869 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.647839 4869 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.647851 4869 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
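Once flag parsing finishes, flags.go:64 prints every effective flag as FLAG: --name="value". Parsed into a dict (kubelet.log again an assumed path), the dump can be diffed across restarts to spot drift in settings such as --node-ip or --system-reserved:

```python
# Parse the flags.go:64 dump above into a dict of effective flag values.
import re

# FLAG:\s+ tolerates a line wrap between "FLAG:" and the flag name;
# the non-greedy value group stops at the first closing quote.
FLAG_RE = re.compile(r'FLAG:\s+(--[\w-]+)="(.*?)"')

def parse_flags(text: str) -> dict:
    return dict(FLAG_RE.findall(text))

with open("kubelet.log") as f:   # path is an assumption
    flags = parse_flags(f.read())

# Values as they appear in this dump:
print(flags["--node-ip"])          # 192.168.126.11
print(flags["--system-reserved"])  # cpu=200m,ephemeral-storage=350Mi,memory=350Mi
```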
Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.647862 4869 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.647871 4869 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.647879 4869 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.647888 4869 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.647935 4869 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.647945 4869 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.647956 4869 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.647966 4869 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.647976 4869 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.647986 4869 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.647996 4869 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648006 4869 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648015 4869 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648023 4869 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648031 4869 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648040 4869 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648048 4869 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648056 4869 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648064 4869 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648072 4869 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648080 4869 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648090 4869 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648100 4869 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648108 4869 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648115 4869 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648124 4869 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648132 4869 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648139 4869 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648149 4869 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648157 4869 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648165 4869 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648174 4869 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648182 4869 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648204 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648213 4869 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648220 4869 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648229 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648237 4869 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648245 4869 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648253 4869 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648261 4869 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648269 4869 feature_gate.go:330] unrecognized feature gate: Example Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648281 4869 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
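Every kubenswrapper record carries a klog-style header: a severity letter (I, W, E, and F for fatal), the month and day as MMDD, wall-clock time, the PID, and the source file:line. A sketch of splitting that header out so W- and E-level records can be filtered from the Info noise:

```python
# Split the klog-style header used by every kubenswrapper record in this
# journal into severity, timestamp, pid, source location, and message.
import re

KLOG_RE = re.compile(
    r"(?P<sev>[IWEF])(?P<mmdd>\d{4}) "
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d+) +"
    r"(?P<pid>\d+) (?P<src>[\w./-]+:\d+)\] (?P<msg>.*)"
)

line = ("W0130 21:43:19.648100 4869 feature_gate.go:330] "
        "unrecognized feature gate: PersistentIPsForVirtualization")
m = KLOG_RE.search(line)
if m:
    print(m.group("sev"), m.group("src"), m.group("msg"))
```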
Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648295 4869 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648303 4869 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648312 4869 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648321 4869 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648331 4869 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648341 4869 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648349 4869 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648357 4869 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648366 4869 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648374 4869 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648382 4869 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648390 4869 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648398 4869 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648406 4869 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648414 4869 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648422 4869 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648430 4869 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648438 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648446 4869 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648455 4869 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648464 4869 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648472 4869 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.648480 4869 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.648506 4869 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false 
RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.657296 4869 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.657321 4869 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657423 4869 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657436 4869 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657440 4869 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657444 4869 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657448 4869 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657452 4869 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657457 4869 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657460 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657464 4869 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657468 4869 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657471 4869 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657475 4869 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657478 4869 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657482 4869 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657485 4869 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657489 4869 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657493 4869 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657496 4869 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657500 4869 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657503 4869 feature_gate.go:330] unrecognized feature gate: Example Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657507 4869 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657511 4869 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 30 21:43:19 crc 
kubenswrapper[4869]: W0130 21:43:19.657514 4869 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657518 4869 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657521 4869 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657526 4869 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657531 4869 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657534 4869 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657539 4869 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657545 4869 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657550 4869 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657554 4869 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657558 4869 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657562 4869 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657566 4869 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657570 4869 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657574 4869 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657578 4869 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657583 4869 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657588 4869 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657592 4869 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657596 4869 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657601 4869 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657604 4869 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657608 4869 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657611 4869 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657615 4869 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 30 21:43:19 crc 
kubenswrapper[4869]: W0130 21:43:19.657619 4869 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657622 4869 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657626 4869 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657630 4869 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657635 4869 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657639 4869 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657643 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657647 4869 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657651 4869 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657655 4869 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657659 4869 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657664 4869 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657668 4869 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657672 4869 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
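The authoritative result of all this gate churn is the feature_gate.go:386 line, which prints the resolved map in Go's fmt rendering, {map[Key:bool ...]}. It parses mechanically:

```python
# Recover the effective gate map from a feature_gate.go:386 line; the
# {map[Key:bool ...]} syntax is Go's fmt rendering of a map[string]bool.
import re

def parse_gate_map(line: str) -> dict:
    body = re.search(r"map\[(.*?)\]", line).group(1)
    return {
        key: value == "true"
        for key, value in (item.split(":") for item in body.split())
    }

line = ("feature gates: {map[CloudDualStackNodeIPs:true "
        "KMSv1:true NodeSwap:false ValidatingAdmissionPolicy:true]}")
gates = parse_gate_map(line)
print([k for k, v in gates.items() if v])   # only the enabled gates
```

Run over the three occurrences of that line in this log, it yields an identical map each time, confirming the repeated warning blocks change nothing about the resolved gates.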
Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657677 4869 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657681 4869 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657685 4869 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657689 4869 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657692 4869 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657695 4869 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657699 4869 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657703 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657706 4869 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657710 4869 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.657716 4869 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657836 4869 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657843 4869 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657848 4869 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657852 4869 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657856 4869 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657860 4869 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657863 4869 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657867 4869 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657870 4869 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657874 4869 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657877 4869 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657881 4869 feature_gate.go:330] unrecognized feature gate: 
SignatureStores Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657885 4869 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657888 4869 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657910 4869 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657914 4869 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657918 4869 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657921 4869 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657926 4869 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657930 4869 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657934 4869 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657939 4869 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657943 4869 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657946 4869 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657950 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657954 4869 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657957 4869 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657961 4869 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657964 4869 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657968 4869 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657971 4869 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657974 4869 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657978 4869 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657981 4869 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657986 4869 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657989 4869 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657993 4869 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.657998 
4869 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.658002 4869 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.658007 4869 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.658011 4869 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.658016 4869 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.658020 4869 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.658023 4869 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.658027 4869 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.658030 4869 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.658034 4869 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.658038 4869 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.658042 4869 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.658047 4869 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.658051 4869 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.658055 4869 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.658059 4869 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.658063 4869 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.658066 4869 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.658069 4869 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.658073 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.658076 4869 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.658080 4869 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.658084 4869 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.658087 4869 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.658091 4869 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 21:43:19 crc 
kubenswrapper[4869]: W0130 21:43:19.658094 4869 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.658098 4869 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.658102 4869 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.658105 4869 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.658109 4869 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.658113 4869 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.658117 4869 feature_gate.go:330] unrecognized feature gate: Example Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.658120 4869 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.658124 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.658130 4869 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.658279 4869 server.go:940] "Client rotation is on, will bootstrap in background" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.662384 4869 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.662462 4869 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
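In the records that follow, certificate_manager.go reports the client certificate's expiry (2026-02-24 05:52:08 UTC) and a jittered rotation deadline (2026-01-14 01:02:30 UTC), then the first CSR POST fails because api-int.crc.testing:6443 refuses connections while the apiserver is still down. The margin the manager left itself can be checked directly; note the upstream kubelet jitters the deadline inside the tail of the validity window, and the exact fraction cannot be derived from this log alone since notBefore is never printed:

```python
# Margin between the rotation deadline and the certificate expiry logged
# by certificate_manager.go below (fractional seconds dropped here).
from datetime import datetime, timezone

FMT = "%Y-%m-%d %H:%M:%S"

def utc(stamp: str) -> datetime:
    return datetime.strptime(stamp, FMT).replace(tzinfo=timezone.utc)

expiry = utc("2026-02-24 05:52:08")
deadline = utc("2026-01-14 01:02:30")

margin = expiry - deadline
print(f"rotation scheduled {margin.days} days {margin.seconds // 3600} h before expiry")
```

The connection-refused CSR error right after it is expected at this point in boot: nothing is serving on 6443 yet, and the certificate manager retries the request.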
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.664140 4869 server.go:997] "Starting client certificate rotation"
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.664164 4869 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.664445 4869 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-14 01:02:30.626191802 +0000 UTC
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.664521 4869 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 30 21:43:19 crc kubenswrapper[4869]: E0130 21:43:19.696751 4869 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.129:6443: connect: connection refused" logger="UnhandledError"
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.696865 4869 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.698819 4869 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.711361 4869 log.go:25] "Validated CRI v1 runtime API"
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.748530 4869 log.go:25] "Validated CRI v1 image API"
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.750198 4869 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.754561 4869 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-30-21-39-11-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.754600 4869 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}]
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.771843 4869 manager.go:217] Machine: {Timestamp:2026-01-30 21:43:19.76914351 +0000 UTC m=+0.654901555 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:073254b5-c7c0-49f1-bed8-4438b0f03db1 BootID:eed4c80a-2486-4f50-8ae9-4ddbd620d70e Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:42 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:31:a3:7a Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:31:a3:7a Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:d0:de:69 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:b2:e9:d0 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:3a:85:4d Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:e0:6c:c2 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:72:28:2b:f7:42:75 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:da:ae:a9:41:fd:0a Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.772093 4869 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.772278 4869 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.775386 4869 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.777061 4869 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.777093 4869 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.777275 4869 topology_manager.go:138] "Creating topology manager with none policy"
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.777286 4869 container_manager_linux.go:303] "Creating device plugin manager"
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.777931 4869 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.777955 4869 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.778154 4869 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.778230 4869 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.785548 4869 kubelet.go:418] "Attempting to sync node with API server"
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.785575 4869 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.785591 4869 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.785603 4869 kubelet.go:324] "Adding apiserver pod source"
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.785614 4869 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.790335 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.129:6443: connect: connection refused
Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.790336 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.129:6443: connect: connection refused
Jan 30 21:43:19 crc kubenswrapper[4869]: E0130 21:43:19.790411 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.129:6443: connect: connection refused" logger="UnhandledError"
Jan 30 21:43:19 crc kubenswrapper[4869]: E0130 21:43:19.790439 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.129:6443: connect: connection refused" logger="UnhandledError"
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.790932 4869 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.792426 4869 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.794281 4869 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.797759 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.797788 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.797799 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.797809 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.797826 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.797836 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.797845 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.797859 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.797868 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.797877 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.797942 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.797955 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.803006 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.803948 4869 server.go:1280] "Started kubelet"
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.804683 4869 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 21:43:19 crc systemd[1]: Started Kubernetes Kubelet.
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.805453 4869 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.806127 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.129:6443: connect: connection refused
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.806510 4869 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.810247 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.810293 4869 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.810822 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 19:24:45.960613999 +0000 UTC
Jan 30 21:43:19 crc kubenswrapper[4869]: E0130 21:43:19.811184 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.811457 4869 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.811748 4869 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 30 21:43:19 crc kubenswrapper[4869]: E0130 21:43:19.811838 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" interval="200ms"
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.811874 4869 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.813023 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.129:6443: connect: connection refused
Jan 30 21:43:19 crc kubenswrapper[4869]: E0130 21:43:19.813119 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.129:6443: connect: connection refused" logger="UnhandledError"
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.813236 4869 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.813252 4869 factory.go:55] Registering systemd factory
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.813270 4869 factory.go:221] Registration of the systemd container factory successfully
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.813737 4869 factory.go:153] Registering CRI-O factory
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.813758 4869 factory.go:221] Registration of the crio container factory successfully
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.813778 4869 factory.go:103] Registering Raw factory
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.813792 4869 manager.go:1196] Started watching for new ooms in manager
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.822426 4869 manager.go:319] Starting recovery of all containers
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.823142 4869 server.go:460] "Adding debug handlers to kubelet server"
Jan 30 21:43:19 crc kubenswrapper[4869]: E0130 21:43:19.823517 4869 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.129:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188fa045b3752b81 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 21:43:19.803882369 +0000 UTC m=+0.689640394,LastTimestamp:2026-01-30 21:43:19.803882369 +0000 UTC m=+0.689640394,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831067 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831132 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831146 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831159 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831171 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831183 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831197 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831209 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831223 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831236 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831247 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831260 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831272 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831288 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831300 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831313 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831326 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831339 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831374 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831385 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831397 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831409 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831420 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831434 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831447 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831460 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831489 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831502 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831515 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831526 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831539 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831569 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831581 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831593 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831605 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831617 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831628 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831639 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831651 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831677 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831689 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831702 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831713 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831724 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831751 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831763 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831776 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831790 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831803 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831816 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831827 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.831840 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835094 4869 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835130 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835147 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835166 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835180 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835192 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835204 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835216 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835229 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835241 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835252 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835265 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835277 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835289 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835307 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835325 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835342 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835356 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835370 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835382 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835395 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835408 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835422 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835436 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835448 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835460 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835472 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835484 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835514 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835556 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835568 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835592 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835606 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835618 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835632 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835644 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835658 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835698 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835737 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835753 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835765 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835780 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835793 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835806 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835818 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835830 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835841 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835865 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835879 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835945 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835959 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835971 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.835984 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.836054 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.836098 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.836112 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.836125 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.836139 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.836151 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.836256 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.836269 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.836294 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.836306 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.836318 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.836329 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.836340 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.836358 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.836382 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.836398 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.836411 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.836423 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.836436 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.836448 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.836461 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.836474 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.836504 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.836528 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.836555 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.836568 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.836612 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.836624 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.836636 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.836648 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.836674 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.836686 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.836698 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.836709 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4"
volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.836721 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.836733 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.836745 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.836770 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.836796 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.836809 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.836834 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.836846 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.836858 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.836869 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.837002 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.837016 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.837044 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.837055 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.837066 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.837077 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.837137 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.837149 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.837183 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.837195 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.837221 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.837231 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.837255 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.837266 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.837278 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.837289 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.837301 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.837312 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.837336 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.837347 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.837383 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.837395 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.837409 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.837428 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.837456 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.837472 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.837522 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.837539 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.837577 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.837594 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.837608 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.837620 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.837631 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.837644 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.837709 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.837730 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.837794 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.837807 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.837863 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.837875 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.837920 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.837932 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.837958 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.837970 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.837994 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.838005 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.838018 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.838055 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.838067 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.838079 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.838104 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.838115 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.838126 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.838138 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.838149 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.838160 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.838172 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.838183 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.838209 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.838221 4869 reconstruct.go:97] "Volume reconstruction finished" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.838230 4869 reconciler.go:26] "Reconciler: start to sync state" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.853265 4869 manager.go:324] Recovery completed Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.864709 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.867668 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.867711 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.867742 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.868736 4869 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.868753 4869 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.868771 4869 state_mem.go:36] "Initialized new in-memory state store" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.872393 4869 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.875516 4869 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.875567 4869 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.875594 4869 kubelet.go:2335] "Starting kubelet main sync loop" Jan 30 21:43:19 crc kubenswrapper[4869]: E0130 21:43:19.875645 4869 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 21:43:19 crc kubenswrapper[4869]: W0130 21:43:19.876466 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.129:6443: connect: connection refused Jan 30 21:43:19 crc kubenswrapper[4869]: E0130 21:43:19.876533 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.129:6443: connect: connection refused" logger="UnhandledError" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.891051 4869 policy_none.go:49] "None policy: Start" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.893013 4869 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.893300 4869 state_mem.go:35] "Initializing new in-memory state store" Jan 30 21:43:19 crc kubenswrapper[4869]: E0130 21:43:19.912603 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.935879 4869 manager.go:334] "Starting Device Plugin manager" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.935967 4869 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.935984 4869 server.go:79] "Starting device plugin registration server" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.936384 4869 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.936406 4869 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.936620 4869 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.936697 4869 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.936705 4869 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 21:43:19 crc kubenswrapper[4869]: E0130 21:43:19.942134 4869 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.976191 4869 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Jan 30 21:43:19 crc kubenswrapper[4869]: 
I0130 21:43:19.976328 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.977273 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.977317 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.977325 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.977437 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.978108 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.978130 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.978139 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.978731 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.978756 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.978784 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.978845 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.978956 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.979713 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.979733 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.979741 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.979759 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.979775 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.979787 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.979714 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.979811 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.979820 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.979849 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.979978 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.980013 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.980686 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.980713 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.980691 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.980743 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.980753 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.980721 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.980923 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.980959 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.980976 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.981599 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.981621 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.981631 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.981604 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.981684 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.981694 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.981920 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.981957 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.982811 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.982836 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:19 crc kubenswrapper[4869]: I0130 21:43:19.982844 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:20 crc kubenswrapper[4869]: E0130 21:43:20.012555 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" interval="400ms" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.036563 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.038601 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.038636 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.038645 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.038668 4869 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 21:43:20 crc kubenswrapper[4869]: E0130 21:43:20.039136 4869 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.129:6443: connect: connection refused" node="crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.041192 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.041258 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.041301 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.041330 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.041439 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.041507 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.041541 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.043231 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.043310 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.043393 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.043466 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.043500 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.043515 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" 
(UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.043566 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.043607 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.145499 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.145545 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.145562 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.145582 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.145598 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.145612 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.145628 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.145644 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.145658 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.145671 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.145684 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.145676 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.145714 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.145724 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.145698 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.145734 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.145782 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.145786 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.145764 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.145787 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.145688 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.145761 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.145807 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.145821 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.145811 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.145845 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.145858 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.145937 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.145986 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.146012 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.239328 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.240725 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.240761 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.240824 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.240851 4869 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 21:43:20 crc kubenswrapper[4869]: E0130 21:43:20.241336 4869 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.129:6443: connect: connection refused" node="crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.317449 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.322768 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.339935 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.348243 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 21:43:20 crc kubenswrapper[4869]: W0130 21:43:20.361755 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-1eee06425751d27306a9b90c156a22fc277a55ed21f2a25467f9703c199318ca WatchSource:0}: Error finding container 1eee06425751d27306a9b90c156a22fc277a55ed21f2a25467f9703c199318ca: Status 404 returned error can't find the container with id 1eee06425751d27306a9b90c156a22fc277a55ed21f2a25467f9703c199318ca Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.365499 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 30 21:43:20 crc kubenswrapper[4869]: W0130 21:43:20.371350 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-131177dc48d43e6a942c0285fca20a59c8ea195cf5940ca5d49ac4476487da08 WatchSource:0}: Error finding container 131177dc48d43e6a942c0285fca20a59c8ea195cf5940ca5d49ac4476487da08: Status 404 returned error can't find the container with id 131177dc48d43e6a942c0285fca20a59c8ea195cf5940ca5d49ac4476487da08 Jan 30 21:43:20 crc kubenswrapper[4869]: W0130 21:43:20.373597 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-7a9e24a772a6a368a455e1f1a58cae35076ffef52908df2df3421fc7763c2a0e WatchSource:0}: Error finding container 7a9e24a772a6a368a455e1f1a58cae35076ffef52908df2df3421fc7763c2a0e: Status 404 returned error can't find the container with id 7a9e24a772a6a368a455e1f1a58cae35076ffef52908df2df3421fc7763c2a0e Jan 30 21:43:20 crc kubenswrapper[4869]: W0130 21:43:20.388632 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-ac1409c19d062ba557328b7e98d20777c24606a3e08fbef078a297cbef41440d WatchSource:0}: Error finding container ac1409c19d062ba557328b7e98d20777c24606a3e08fbef078a297cbef41440d: Status 404 returned error can't find the container with id ac1409c19d062ba557328b7e98d20777c24606a3e08fbef078a297cbef41440d Jan 30 21:43:20 crc kubenswrapper[4869]: E0130 21:43:20.414731 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" interval="800ms" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.642317 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.643888 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.643952 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.643965 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.643996 4869 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 
21:43:20 crc kubenswrapper[4869]: E0130 21:43:20.644524 4869 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.129:6443: connect: connection refused" node="crc" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.810997 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 15:57:53.547206957 +0000 UTC Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.812665 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.129:6443: connect: connection refused Jan 30 21:43:20 crc kubenswrapper[4869]: W0130 21:43:20.861650 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.129:6443: connect: connection refused Jan 30 21:43:20 crc kubenswrapper[4869]: E0130 21:43:20.861722 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.129:6443: connect: connection refused" logger="UnhandledError" Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.880355 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"131177dc48d43e6a942c0285fca20a59c8ea195cf5940ca5d49ac4476487da08"} Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.881227 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fe39fdc8359ecee50b970b19651289e563596701704d03a8c7a77bdf543d8aae"} Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.881995 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1eee06425751d27306a9b90c156a22fc277a55ed21f2a25467f9703c199318ca"} Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.882743 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ac1409c19d062ba557328b7e98d20777c24606a3e08fbef078a297cbef41440d"} Jan 30 21:43:20 crc kubenswrapper[4869]: I0130 21:43:20.883441 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"7a9e24a772a6a368a455e1f1a58cae35076ffef52908df2df3421fc7763c2a0e"} Jan 30 21:43:20 crc kubenswrapper[4869]: W0130 21:43:20.892084 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.129:6443: connect: connection refused Jan 30 21:43:20 crc kubenswrapper[4869]: E0130 21:43:20.892140 4869 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.129:6443: connect: connection refused" logger="UnhandledError" Jan 30 21:43:20 crc kubenswrapper[4869]: W0130 21:43:20.932523 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.129:6443: connect: connection refused Jan 30 21:43:20 crc kubenswrapper[4869]: E0130 21:43:20.932613 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.129:6443: connect: connection refused" logger="UnhandledError" Jan 30 21:43:21 crc kubenswrapper[4869]: W0130 21:43:21.102598 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.129:6443: connect: connection refused Jan 30 21:43:21 crc kubenswrapper[4869]: E0130 21:43:21.102679 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.129:6443: connect: connection refused" logger="UnhandledError" Jan 30 21:43:21 crc kubenswrapper[4869]: E0130 21:43:21.215180 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" interval="1.6s" Jan 30 21:43:21 crc kubenswrapper[4869]: I0130 21:43:21.445199 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:43:21 crc kubenswrapper[4869]: I0130 21:43:21.447473 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:21 crc kubenswrapper[4869]: I0130 21:43:21.447507 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:21 crc kubenswrapper[4869]: I0130 21:43:21.447517 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:21 crc kubenswrapper[4869]: I0130 21:43:21.447539 4869 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 21:43:21 crc kubenswrapper[4869]: E0130 21:43:21.447930 4869 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.129:6443: connect: connection refused" node="crc" Jan 30 21:43:21 crc kubenswrapper[4869]: I0130 21:43:21.811334 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 03:59:37.075426517 +0000 UTC Jan 30 21:43:21 crc kubenswrapper[4869]: I0130 21:43:21.813001 4869 csi_plugin.go:884] Failed to 
contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.129:6443: connect: connection refused Jan 30 21:43:21 crc kubenswrapper[4869]: I0130 21:43:21.885758 4869 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 30 21:43:21 crc kubenswrapper[4869]: I0130 21:43:21.886802 4869 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="e817aca3610a40823460973541a6bb549f4d88d55919c93640ae8c6dc0874946" exitCode=0 Jan 30 21:43:21 crc kubenswrapper[4869]: I0130 21:43:21.886879 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"e817aca3610a40823460973541a6bb549f4d88d55919c93640ae8c6dc0874946"} Jan 30 21:43:21 crc kubenswrapper[4869]: E0130 21:43:21.887152 4869 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.129:6443: connect: connection refused" logger="UnhandledError" Jan 30 21:43:21 crc kubenswrapper[4869]: I0130 21:43:21.886887 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:43:21 crc kubenswrapper[4869]: I0130 21:43:21.887785 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:21 crc kubenswrapper[4869]: I0130 21:43:21.887810 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:21 crc kubenswrapper[4869]: I0130 21:43:21.887865 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:21 crc kubenswrapper[4869]: I0130 21:43:21.888473 4869 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="07626e9fc038e2c00d6751a0660aedfe5e66e6f735999b30cbffb7d6f8ff7011" exitCode=0 Jan 30 21:43:21 crc kubenswrapper[4869]: I0130 21:43:21.888525 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"07626e9fc038e2c00d6751a0660aedfe5e66e6f735999b30cbffb7d6f8ff7011"} Jan 30 21:43:21 crc kubenswrapper[4869]: I0130 21:43:21.888604 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:43:21 crc kubenswrapper[4869]: I0130 21:43:21.889843 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:21 crc kubenswrapper[4869]: I0130 21:43:21.889879 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:21 crc kubenswrapper[4869]: I0130 21:43:21.889888 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:21 crc kubenswrapper[4869]: I0130 21:43:21.890558 4869 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278" exitCode=0 Jan 30 21:43:21 crc kubenswrapper[4869]: I0130 21:43:21.890585 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278"} Jan 30 21:43:21 crc kubenswrapper[4869]: I0130 21:43:21.890623 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:43:21 crc kubenswrapper[4869]: I0130 21:43:21.891190 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:21 crc kubenswrapper[4869]: I0130 21:43:21.891214 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:21 crc kubenswrapper[4869]: I0130 21:43:21.891226 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:21 crc kubenswrapper[4869]: I0130 21:43:21.892178 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3b5f07c14a94021761f655ab66d0782a878dacdacbe9abc2c02db132e5f5721a"} Jan 30 21:43:21 crc kubenswrapper[4869]: I0130 21:43:21.892203 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f0c8909f295c8aea1aa2d6da1a709d1692b8fbdd6bb389d339d644655788328b"} Jan 30 21:43:21 crc kubenswrapper[4869]: I0130 21:43:21.892347 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:43:21 crc kubenswrapper[4869]: I0130 21:43:21.893189 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:21 crc kubenswrapper[4869]: I0130 21:43:21.893210 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:21 crc kubenswrapper[4869]: I0130 21:43:21.893218 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:21 crc kubenswrapper[4869]: I0130 21:43:21.893804 4869 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="0f10e499f68f08e0805f64d61c975daaf1f15902a31b0b1b8a53259eaff38d6f" exitCode=0 Jan 30 21:43:21 crc kubenswrapper[4869]: I0130 21:43:21.893831 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"0f10e499f68f08e0805f64d61c975daaf1f15902a31b0b1b8a53259eaff38d6f"} Jan 30 21:43:21 crc kubenswrapper[4869]: I0130 21:43:21.893928 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:43:21 crc kubenswrapper[4869]: I0130 21:43:21.894906 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:21 crc kubenswrapper[4869]: I0130 21:43:21.894937 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:21 crc kubenswrapper[4869]: I0130 21:43:21.894946 
4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:22 crc kubenswrapper[4869]: I0130 21:43:22.811862 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 19:05:42.025826096 +0000 UTC Jan 30 21:43:22 crc kubenswrapper[4869]: I0130 21:43:22.812969 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.129:6443: connect: connection refused Jan 30 21:43:22 crc kubenswrapper[4869]: E0130 21:43:22.816779 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" interval="3.2s" Jan 30 21:43:22 crc kubenswrapper[4869]: I0130 21:43:22.899275 4869 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="36bc8af5ade6fd4df68a2c34f975373cabf52a0b157626deb5435ca6774d7fa2" exitCode=0 Jan 30 21:43:22 crc kubenswrapper[4869]: I0130 21:43:22.899374 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"36bc8af5ade6fd4df68a2c34f975373cabf52a0b157626deb5435ca6774d7fa2"} Jan 30 21:43:22 crc kubenswrapper[4869]: I0130 21:43:22.899549 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:43:22 crc kubenswrapper[4869]: I0130 21:43:22.900883 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:22 crc kubenswrapper[4869]: I0130 21:43:22.900934 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:22 crc kubenswrapper[4869]: I0130 21:43:22.900990 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:22 crc kubenswrapper[4869]: I0130 21:43:22.904869 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"f7301698629901946ea3fa72f6c03cac1a88d253d9e14f4a8c225c8ff390fc0d"} Jan 30 21:43:22 crc kubenswrapper[4869]: I0130 21:43:22.905110 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:43:22 crc kubenswrapper[4869]: I0130 21:43:22.907826 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:22 crc kubenswrapper[4869]: I0130 21:43:22.907874 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:22 crc kubenswrapper[4869]: I0130 21:43:22.907914 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:22 crc kubenswrapper[4869]: I0130 21:43:22.910595 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"568c346025fcd6f3b511f60206c6836f445c3e1be15135c064fed0507d20aa22"} Jan 
30 21:43:22 crc kubenswrapper[4869]: I0130 21:43:22.910631 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"68975f2095b7dcd1eaf59dd358c4ac1b731116418a8991290d4ee2d66611f2d0"} Jan 30 21:43:22 crc kubenswrapper[4869]: I0130 21:43:22.910650 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"754edbe89075ed676689e09cc4aceaa7e45560a1db966e12c22a96be3bd3eba9"} Jan 30 21:43:22 crc kubenswrapper[4869]: I0130 21:43:22.910684 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:43:22 crc kubenswrapper[4869]: I0130 21:43:22.912504 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:22 crc kubenswrapper[4869]: I0130 21:43:22.912532 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:22 crc kubenswrapper[4869]: I0130 21:43:22.912543 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:22 crc kubenswrapper[4869]: I0130 21:43:22.916044 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9563e1c219434c81376559138e931e8457d653060f639891e4a15b3a61bcf2d5"} Jan 30 21:43:22 crc kubenswrapper[4869]: I0130 21:43:22.916095 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7f22a7faa65595d272f6cb952ac0054da3d3b6a0017dc004f52c65d823f1da06"} Jan 30 21:43:22 crc kubenswrapper[4869]: I0130 21:43:22.916117 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7d705fd79e95205c59ebe341512286e7be25a663cdfe41ec50f0f57582c9f5ce"} Jan 30 21:43:22 crc kubenswrapper[4869]: I0130 21:43:22.916136 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bb0a197870fc1ba15cbe9437452dfeac28e7670fe2383e71f5e29276082f7236"} Jan 30 21:43:22 crc kubenswrapper[4869]: I0130 21:43:22.919983 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"79fd36ca3590a916ec08c8ae4d3b410eb11261c6a5295a4274119f93e08079cf"} Jan 30 21:43:22 crc kubenswrapper[4869]: I0130 21:43:22.920029 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"267daf399d26dd0253eed95c94b44bd4299990a14bb28159ddfd1f324dec6192"} Jan 30 21:43:22 crc kubenswrapper[4869]: I0130 21:43:22.920291 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:43:22 crc kubenswrapper[4869]: I0130 21:43:22.922039 4869 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:22 crc kubenswrapper[4869]: I0130 21:43:22.922102 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:22 crc kubenswrapper[4869]: I0130 21:43:22.922117 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:23 crc kubenswrapper[4869]: I0130 21:43:23.048647 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:43:23 crc kubenswrapper[4869]: I0130 21:43:23.050214 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:23 crc kubenswrapper[4869]: I0130 21:43:23.050246 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:23 crc kubenswrapper[4869]: I0130 21:43:23.050256 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:23 crc kubenswrapper[4869]: I0130 21:43:23.050275 4869 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 21:43:23 crc kubenswrapper[4869]: E0130 21:43:23.050633 4869 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.129:6443: connect: connection refused" node="crc" Jan 30 21:43:23 crc kubenswrapper[4869]: W0130 21:43:23.165674 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.129:6443: connect: connection refused Jan 30 21:43:23 crc kubenswrapper[4869]: E0130 21:43:23.165768 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.129:6443: connect: connection refused" logger="UnhandledError" Jan 30 21:43:23 crc kubenswrapper[4869]: W0130 21:43:23.248593 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.129:6443: connect: connection refused Jan 30 21:43:23 crc kubenswrapper[4869]: E0130 21:43:23.248721 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.129:6443: connect: connection refused" logger="UnhandledError" Jan 30 21:43:23 crc kubenswrapper[4869]: W0130 21:43:23.339048 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.129:6443: connect: connection refused Jan 30 21:43:23 crc kubenswrapper[4869]: E0130 21:43:23.339178 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.129:6443: connect: connection refused" logger="UnhandledError" Jan 30 21:43:23 crc kubenswrapper[4869]: W0130 21:43:23.735104 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.129:6443: connect: connection refused Jan 30 21:43:23 crc kubenswrapper[4869]: E0130 21:43:23.735623 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.129:6443: connect: connection refused" logger="UnhandledError" Jan 30 21:43:23 crc kubenswrapper[4869]: I0130 21:43:23.812691 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 19:42:17.13358018 +0000 UTC Jan 30 21:43:23 crc kubenswrapper[4869]: I0130 21:43:23.814308 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.129:6443: connect: connection refused Jan 30 21:43:23 crc kubenswrapper[4869]: E0130 21:43:23.916942 4869 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.129:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188fa045b3752b81 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 21:43:19.803882369 +0000 UTC m=+0.689640394,LastTimestamp:2026-01-30 21:43:19.803882369 +0000 UTC m=+0.689640394,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 21:43:23 crc kubenswrapper[4869]: I0130 21:43:23.928446 4869 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="dda41f162f658a9344c204bf1da3bc99b686acd1403a2329b3813fb037aea0a7" exitCode=0 Jan 30 21:43:23 crc kubenswrapper[4869]: I0130 21:43:23.928520 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"dda41f162f658a9344c204bf1da3bc99b686acd1403a2329b3813fb037aea0a7"} Jan 30 21:43:23 crc kubenswrapper[4869]: I0130 21:43:23.928629 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:43:23 crc kubenswrapper[4869]: I0130 21:43:23.929361 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:23 crc kubenswrapper[4869]: I0130 21:43:23.929388 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:23 crc kubenswrapper[4869]: I0130 21:43:23.929398 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 30 21:43:23 crc kubenswrapper[4869]: I0130 21:43:23.929992 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 21:43:23 crc kubenswrapper[4869]: I0130 21:43:23.931639 4869 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7f412eecc65b9e4942b9685b797d91b17d8dae998255d25c5f793a997fbd3357" exitCode=255 Jan 30 21:43:23 crc kubenswrapper[4869]: I0130 21:43:23.931658 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"7f412eecc65b9e4942b9685b797d91b17d8dae998255d25c5f793a997fbd3357"} Jan 30 21:43:23 crc kubenswrapper[4869]: I0130 21:43:23.931736 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:43:23 crc kubenswrapper[4869]: I0130 21:43:23.931750 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:43:23 crc kubenswrapper[4869]: I0130 21:43:23.931851 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 21:43:23 crc kubenswrapper[4869]: I0130 21:43:23.931739 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:43:23 crc kubenswrapper[4869]: I0130 21:43:23.931961 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:43:23 crc kubenswrapper[4869]: I0130 21:43:23.932874 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:23 crc kubenswrapper[4869]: I0130 21:43:23.932918 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:23 crc kubenswrapper[4869]: I0130 21:43:23.932920 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:23 crc kubenswrapper[4869]: I0130 21:43:23.932937 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:23 crc kubenswrapper[4869]: I0130 21:43:23.932947 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:23 crc kubenswrapper[4869]: I0130 21:43:23.932953 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:23 crc kubenswrapper[4869]: I0130 21:43:23.932977 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:23 crc kubenswrapper[4869]: I0130 21:43:23.933014 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:23 crc kubenswrapper[4869]: I0130 21:43:23.933026 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:23 crc kubenswrapper[4869]: I0130 21:43:23.933500 4869 scope.go:117] "RemoveContainer" containerID="7f412eecc65b9e4942b9685b797d91b17d8dae998255d25c5f793a997fbd3357" Jan 30 21:43:23 crc kubenswrapper[4869]: I0130 21:43:23.934581 4869 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:23 crc kubenswrapper[4869]: I0130 21:43:23.934781 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:23 crc kubenswrapper[4869]: I0130 21:43:23.935014 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:24 crc kubenswrapper[4869]: I0130 21:43:24.812806 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 03:20:27.064846131 +0000 UTC Jan 30 21:43:24 crc kubenswrapper[4869]: I0130 21:43:24.934932 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 21:43:24 crc kubenswrapper[4869]: I0130 21:43:24.936587 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3"} Jan 30 21:43:24 crc kubenswrapper[4869]: I0130 21:43:24.936671 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:43:24 crc kubenswrapper[4869]: I0130 21:43:24.936738 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:43:24 crc kubenswrapper[4869]: I0130 21:43:24.937436 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:24 crc kubenswrapper[4869]: I0130 21:43:24.937556 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:24 crc kubenswrapper[4869]: I0130 21:43:24.937573 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:24 crc kubenswrapper[4869]: I0130 21:43:24.940109 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2817590a40ce5b7301b25e6abfa3d27d8da6d9ee288bd6b4f8e6e39330f868c4"} Jan 30 21:43:24 crc kubenswrapper[4869]: I0130 21:43:24.940253 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0604556c3c81417d9bdad29426c24834744322c27776a24d892c935967b5596b"} Jan 30 21:43:24 crc kubenswrapper[4869]: I0130 21:43:24.940350 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2a8f73fa75768fa82a917a5255b818c627d9962056307b795934520dba257c5d"} Jan 30 21:43:24 crc kubenswrapper[4869]: I0130 21:43:24.940584 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5582c2fd3cfa3c8f47f25270bf2a4a188ace44c833f6c539488e400f2a5246fe"} Jan 30 21:43:24 crc kubenswrapper[4869]: I0130 21:43:24.940671 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6eded6ec401030172f3343f4ef21d28d2f96bc2da3032e1a9d3978f1b6f7d267"} Jan 30 21:43:24 crc 
kubenswrapper[4869]: I0130 21:43:24.940147 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:43:24 crc kubenswrapper[4869]: I0130 21:43:24.940174 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:43:24 crc kubenswrapper[4869]: I0130 21:43:24.941816 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:24 crc kubenswrapper[4869]: I0130 21:43:24.941843 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:24 crc kubenswrapper[4869]: I0130 21:43:24.941854 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:24 crc kubenswrapper[4869]: I0130 21:43:24.941927 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:24 crc kubenswrapper[4869]: I0130 21:43:24.941951 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:24 crc kubenswrapper[4869]: I0130 21:43:24.941959 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:24 crc kubenswrapper[4869]: I0130 21:43:24.987502 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:43:25 crc kubenswrapper[4869]: I0130 21:43:25.813295 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 03:53:49.485599922 +0000 UTC Jan 30 21:43:25 crc kubenswrapper[4869]: I0130 21:43:25.942455 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:43:25 crc kubenswrapper[4869]: I0130 21:43:25.942557 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:43:25 crc kubenswrapper[4869]: I0130 21:43:25.942464 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:43:25 crc kubenswrapper[4869]: I0130 21:43:25.948527 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:25 crc kubenswrapper[4869]: I0130 21:43:25.948531 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:25 crc kubenswrapper[4869]: I0130 21:43:25.948602 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:25 crc kubenswrapper[4869]: I0130 21:43:25.948614 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:25 crc kubenswrapper[4869]: I0130 21:43:25.948573 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:25 crc kubenswrapper[4869]: I0130 21:43:25.948670 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:26 crc kubenswrapper[4869]: I0130 21:43:26.166387 4869 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 30 21:43:26 crc kubenswrapper[4869]: I0130 21:43:26.251299 4869 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:43:26 crc kubenswrapper[4869]: I0130 21:43:26.252722 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:26 crc kubenswrapper[4869]: I0130 21:43:26.252786 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:26 crc kubenswrapper[4869]: I0130 21:43:26.252807 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:26 crc kubenswrapper[4869]: I0130 21:43:26.252842 4869 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 21:43:26 crc kubenswrapper[4869]: I0130 21:43:26.814369 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 05:35:00.0249294 +0000 UTC Jan 30 21:43:26 crc kubenswrapper[4869]: I0130 21:43:26.943974 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:43:26 crc kubenswrapper[4869]: I0130 21:43:26.944773 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:26 crc kubenswrapper[4869]: I0130 21:43:26.944800 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:26 crc kubenswrapper[4869]: I0130 21:43:26.944809 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:27 crc kubenswrapper[4869]: I0130 21:43:27.124432 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 30 21:43:27 crc kubenswrapper[4869]: I0130 21:43:27.124620 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:43:27 crc kubenswrapper[4869]: I0130 21:43:27.125744 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:27 crc kubenswrapper[4869]: I0130 21:43:27.125782 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:27 crc kubenswrapper[4869]: I0130 21:43:27.125795 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:27 crc kubenswrapper[4869]: I0130 21:43:27.815426 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 06:19:48.735472506 +0000 UTC Jan 30 21:43:28 crc kubenswrapper[4869]: I0130 21:43:28.215283 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:43:28 crc kubenswrapper[4869]: I0130 21:43:28.215422 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:43:28 crc kubenswrapper[4869]: I0130 21:43:28.216336 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:28 crc kubenswrapper[4869]: I0130 21:43:28.216360 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:28 crc kubenswrapper[4869]: I0130 21:43:28.216369 4869 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:28 crc kubenswrapper[4869]: I0130 21:43:28.816193 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 17:32:14.062185165 +0000 UTC Jan 30 21:43:28 crc kubenswrapper[4869]: I0130 21:43:28.906969 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 30 21:43:28 crc kubenswrapper[4869]: I0130 21:43:28.907204 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:43:28 crc kubenswrapper[4869]: I0130 21:43:28.908608 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:28 crc kubenswrapper[4869]: I0130 21:43:28.908646 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:28 crc kubenswrapper[4869]: I0130 21:43:28.908658 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:29 crc kubenswrapper[4869]: I0130 21:43:29.796514 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:43:29 crc kubenswrapper[4869]: I0130 21:43:29.796743 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:43:29 crc kubenswrapper[4869]: I0130 21:43:29.798292 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:29 crc kubenswrapper[4869]: I0130 21:43:29.798345 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:29 crc kubenswrapper[4869]: I0130 21:43:29.798356 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:29 crc kubenswrapper[4869]: I0130 21:43:29.802483 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:43:29 crc kubenswrapper[4869]: I0130 21:43:29.816408 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 05:27:57.266397058 +0000 UTC Jan 30 21:43:29 crc kubenswrapper[4869]: I0130 21:43:29.848984 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:43:29 crc kubenswrapper[4869]: E0130 21:43:29.942332 4869 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 21:43:29 crc kubenswrapper[4869]: I0130 21:43:29.950401 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:43:29 crc kubenswrapper[4869]: I0130 21:43:29.950665 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:43:29 crc kubenswrapper[4869]: I0130 21:43:29.951609 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:29 crc kubenswrapper[4869]: I0130 21:43:29.951682 4869 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:29 crc kubenswrapper[4869]: I0130 21:43:29.951703 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:30 crc kubenswrapper[4869]: I0130 21:43:30.817006 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 12:11:00.842448815 +0000 UTC Jan 30 21:43:30 crc kubenswrapper[4869]: I0130 21:43:30.952454 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:43:30 crc kubenswrapper[4869]: I0130 21:43:30.953701 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:30 crc kubenswrapper[4869]: I0130 21:43:30.953766 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:30 crc kubenswrapper[4869]: I0130 21:43:30.953784 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:31 crc kubenswrapper[4869]: I0130 21:43:31.817490 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 14:35:48.430298494 +0000 UTC Jan 30 21:43:32 crc kubenswrapper[4869]: I0130 21:43:32.318186 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:43:32 crc kubenswrapper[4869]: I0130 21:43:32.318455 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:43:32 crc kubenswrapper[4869]: I0130 21:43:32.320626 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:32 crc kubenswrapper[4869]: I0130 21:43:32.320710 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:32 crc kubenswrapper[4869]: I0130 21:43:32.320722 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:32 crc kubenswrapper[4869]: I0130 21:43:32.336846 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:43:32 crc kubenswrapper[4869]: I0130 21:43:32.818480 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 12:06:55.883893738 +0000 UTC Jan 30 21:43:32 crc kubenswrapper[4869]: I0130 21:43:32.850010 4869 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 21:43:32 crc kubenswrapper[4869]: I0130 21:43:32.850084 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout 
exceeded while awaiting headers)" Jan 30 21:43:32 crc kubenswrapper[4869]: I0130 21:43:32.957065 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:43:32 crc kubenswrapper[4869]: I0130 21:43:32.958115 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:32 crc kubenswrapper[4869]: I0130 21:43:32.958304 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:32 crc kubenswrapper[4869]: I0130 21:43:32.958436 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:33 crc kubenswrapper[4869]: I0130 21:43:33.820055 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 03:01:41.922046129 +0000 UTC Jan 30 21:43:34 crc kubenswrapper[4869]: I0130 21:43:34.181008 4869 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 30 21:43:34 crc kubenswrapper[4869]: I0130 21:43:34.181073 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 30 21:43:34 crc kubenswrapper[4869]: I0130 21:43:34.187160 4869 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 30 21:43:34 crc kubenswrapper[4869]: I0130 21:43:34.187224 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 30 21:43:34 crc kubenswrapper[4869]: I0130 21:43:34.821029 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 07:31:53.961891707 +0000 UTC Jan 30 21:43:35 crc kubenswrapper[4869]: I0130 21:43:35.821953 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 22:52:47.52494151 +0000 UTC Jan 30 21:43:36 crc kubenswrapper[4869]: I0130 21:43:36.823035 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 20:32:27.731169853 +0000 UTC Jan 30 21:43:37 crc kubenswrapper[4869]: I0130 21:43:37.156663 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 30 21:43:37 crc kubenswrapper[4869]: I0130 21:43:37.156867 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Jan 30 21:43:37 crc kubenswrapper[4869]: I0130 21:43:37.158398 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:37 crc kubenswrapper[4869]: I0130 21:43:37.158465 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:37 crc kubenswrapper[4869]: I0130 21:43:37.158491 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:37 crc kubenswrapper[4869]: I0130 21:43:37.176450 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 30 21:43:37 crc kubenswrapper[4869]: I0130 21:43:37.824973 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 13:11:45.294611754 +0000 UTC Jan 30 21:43:37 crc kubenswrapper[4869]: I0130 21:43:37.919945 4869 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 30 21:43:37 crc kubenswrapper[4869]: I0130 21:43:37.920292 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 30 21:43:37 crc kubenswrapper[4869]: I0130 21:43:37.968358 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:43:37 crc kubenswrapper[4869]: I0130 21:43:37.969608 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:37 crc kubenswrapper[4869]: I0130 21:43:37.969756 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:37 crc kubenswrapper[4869]: I0130 21:43:37.969858 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:38 crc kubenswrapper[4869]: I0130 21:43:38.221029 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:43:38 crc kubenswrapper[4869]: I0130 21:43:38.221249 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:43:38 crc kubenswrapper[4869]: I0130 21:43:38.222190 4869 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 30 21:43:38 crc kubenswrapper[4869]: I0130 21:43:38.222242 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 30 21:43:38 crc kubenswrapper[4869]: I0130 
21:43:38.223150 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:38 crc kubenswrapper[4869]: I0130 21:43:38.223219 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:38 crc kubenswrapper[4869]: I0130 21:43:38.223238 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:38 crc kubenswrapper[4869]: I0130 21:43:38.228780 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:43:38 crc kubenswrapper[4869]: I0130 21:43:38.825943 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 03:11:54.06096238 +0000 UTC Jan 30 21:43:38 crc kubenswrapper[4869]: I0130 21:43:38.970653 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:43:38 crc kubenswrapper[4869]: I0130 21:43:38.971220 4869 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 30 21:43:38 crc kubenswrapper[4869]: I0130 21:43:38.971317 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 30 21:43:38 crc kubenswrapper[4869]: I0130 21:43:38.971877 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:38 crc kubenswrapper[4869]: I0130 21:43:38.971931 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:38 crc kubenswrapper[4869]: I0130 21:43:38.971944 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:39 crc kubenswrapper[4869]: E0130 21:43:39.181622 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.184810 4869 trace.go:236] Trace[753834631]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 21:43:27.681) (total time: 11502ms): Jan 30 21:43:39 crc kubenswrapper[4869]: Trace[753834631]: ---"Objects listed" error: 11502ms (21:43:39.184) Jan 30 21:43:39 crc kubenswrapper[4869]: Trace[753834631]: [11.50297441s] [11.50297441s] END Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.184864 4869 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 30 21:43:39 crc kubenswrapper[4869]: E0130 21:43:39.188155 4869 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.188547 4869 
reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.188781 4869 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.189536 4869 trace.go:236] Trace[485824119]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 21:43:28.568) (total time: 10620ms): Jan 30 21:43:39 crc kubenswrapper[4869]: Trace[485824119]: ---"Objects listed" error: 10620ms (21:43:39.189) Jan 30 21:43:39 crc kubenswrapper[4869]: Trace[485824119]: [10.620726436s] [10.620726436s] END Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.189588 4869 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.189592 4869 trace.go:236] Trace[715169699]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 21:43:29.013) (total time: 10175ms): Jan 30 21:43:39 crc kubenswrapper[4869]: Trace[715169699]: ---"Objects listed" error: 10175ms (21:43:39.189) Jan 30 21:43:39 crc kubenswrapper[4869]: Trace[715169699]: [10.175767899s] [10.175767899s] END Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.189632 4869 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.195007 4869 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.245962 4869 csr.go:261] certificate signing request csr-fpkmj is approved, waiting to be issued Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.280455 4869 csr.go:257] certificate signing request csr-fpkmj is issued Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.663544 4869 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 30 21:43:39 crc kubenswrapper[4869]: W0130 21:43:39.663718 4869 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 30 21:43:39 crc kubenswrapper[4869]: W0130 21:43:39.663760 4869 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 30 21:43:39 crc kubenswrapper[4869]: W0130 21:43:39.663766 4869 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 30 21:43:39 crc kubenswrapper[4869]: W0130 21:43:39.663773 4869 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 30 21:43:39 crc kubenswrapper[4869]: E0130 21:43:39.663832 4869 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events\": read tcp 
38.102.83.129:53418->38.102.83.129:6443: use of closed network connection" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188fa045d594116b openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 21:43:20.376332651 +0000 UTC m=+1.262090686,LastTimestamp:2026-01-30 21:43:20.376332651 +0000 UTC m=+1.262090686,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.798314 4869 apiserver.go:52] "Watching apiserver" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.804928 4869 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.805347 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"] Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.805717 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:43:39 crc kubenswrapper[4869]: E0130 21:43:39.805773 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.805790 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.806071 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.806111 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.806181 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.806249 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:43:39 crc kubenswrapper[4869]: E0130 21:43:39.806185 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:43:39 crc kubenswrapper[4869]: E0130 21:43:39.806298 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.807628 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.807981 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.808008 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.807983 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.808291 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.808648 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.808945 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.809743 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.811412 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.813580 4869 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.826482 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 14:19:41.414290808 +0000 UTC Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.836218 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.850093 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.852654 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.858270 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.862193 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.864301 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.872048 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.880855 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.889242 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.892454 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.892618 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.892691 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.892774 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.892918 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.892994 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.893061 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.893131 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.893197 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.893264 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.893356 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.892689 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.893044 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.893436 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.893528 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.893217 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.893215 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.893334 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.893552 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.893708 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.893807 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.893885 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.893978 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.894060 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.894137 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.893721 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.894001 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.894169 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.894305 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.894376 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.894444 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.894513 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.894630 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.894694 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.894753 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.894818 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.894885 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.894971 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.895034 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.895098 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.895183 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.895269 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.895336 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.895399 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.895460 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.895529 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.895598 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.895664 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.895749 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.895815 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.895880 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.895988 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.896059 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.896122 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.896190 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.896260 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.896324 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.896391 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.896458 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.896517 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.896580 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.896644 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.896726 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.896793 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.896856 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.896932 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.897008 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.897089 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: 
\"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.897152 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.897215 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.897277 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.897337 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.897405 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.897466 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.897532 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.897595 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.897654 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.897721 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.897808 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.897880 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.897968 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.898034 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.898097 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.898158 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.898217 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.898287 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.898353 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.898421 4869 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.898487 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.898566 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.898662 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.898731 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.898800 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.898892 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.898975 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.899037 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.899108 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " 
Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.899175 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.899236 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.899305 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.899384 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.899450 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.899517 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.899577 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.899643 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.899703 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.899763 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.899830 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.899892 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.899988 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.900053 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.900134 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.900227 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.900317 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.900409 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.900498 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.900586 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.900682 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.900842 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.900965 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.901068 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.901191 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.901299 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.901383 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.901464 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.901549 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.901632 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.901720 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.901836 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.901983 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.902620 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.902660 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.902695 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.902777 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.902812 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.902837 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.902867 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.902890 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.902939 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.902962 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.903092 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.903118 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.903145 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.903167 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.903337 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.903364 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.903387 4869 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.903412 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.903438 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.903460 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.903485 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.903514 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.903539 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.903564 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.903593 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.903616 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 21:43:39 crc kubenswrapper[4869]: 
I0130 21:43:39.903639 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.903667 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.903808 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.903841 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.904320 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.904348 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.904364 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.904382 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.904411 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.904435 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 30 21:43:39 crc 
kubenswrapper[4869]: I0130 21:43:39.904469 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.904486 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.904504 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.904520 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.904536 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.904553 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.904572 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.904603 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.904629 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.904660 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 
21:43:39.904821 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.904843 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.904859 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.904920 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.904938 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.904955 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.904971 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.904987 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.905104 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.905151 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.905172 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.905188 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.905204 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.905242 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.905261 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.905319 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.905574 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.905595 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.905626 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.905643 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.905660 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.905714 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.905732 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.905750 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.905767 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.905834 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.906055 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.906077 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.906094 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.906123 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.906183 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.906205 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.906378 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.906421 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.906460 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.906533 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.906552 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.906570 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.906755 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.906777 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.906811 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.906850 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.906932 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.906956 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.907082 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.907101 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.907116 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.907130 4869 reconciler_common.go:293] 
"Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.907144 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.907166 4869 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.907176 4869 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.907190 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.907483 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.907496 4869 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.907505 4869 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.909988 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.894337 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.928989 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.929055 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.894823 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.894945 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.894979 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.895149 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.895178 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.895395 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.895597 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.895629 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.895862 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.895648 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.896043 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.896320 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.894753 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.896440 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.929310 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.896521 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.896700 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.896836 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.897161 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.897175 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.897176 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.897199 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.897312 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.897482 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.897494 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.897550 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.897583 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.898884 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.898941 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.899361 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.906325 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.906759 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.906823 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.907823 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.908691 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.908872 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.908960 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.909475 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.909657 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.909813 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.909953 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.910024 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.910212 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.910232 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.910400 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.910828 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.910562 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.911457 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.911550 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.911732 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.911973 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.912989 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: E0130 21:43:39.913306 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:43:40.413280313 +0000 UTC m=+21.299038338 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.929555 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.914444 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.914496 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.914703 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.914715 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.914976 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.915003 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.914937 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.915345 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.915435 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.915659 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.915833 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.916208 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.916287 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.916368 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.916586 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.916841 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.917252 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.917284 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.920369 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.920524 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.920739 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). 
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.921328 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.921379 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.921403 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.921932 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.922138 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.929764 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.922266 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.922391 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.922721 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.922935 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.923260 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.923443 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.923574 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.923697 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.923832 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.923976 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.924108 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.924234 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.924371 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.924382 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.924599 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.924732 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.924999 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.925264 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.925676 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.925858 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.925820 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.926048 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.926113 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.926137 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.926308 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.926337 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.926355 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.926547 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.926565 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.927121 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.927697 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.927880 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.928051 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.928173 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.928309 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.928367 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.928507 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.928714 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.928763 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.928788 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.928807 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.930377 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.929205 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.930448 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.930569 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.930616 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.930623 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.930814 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.930844 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.930862 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.931016 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.931141 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.931168 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.931209 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: E0130 21:43:39.931408 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.931456 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.931788 4869 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 30 21:43:39 crc kubenswrapper[4869]: E0130 21:43:39.931465 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:43:40.431446207 +0000 UTC m=+21.317204232 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.931709 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.931916 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.931825 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.932064 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.932105 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: E0130 21:43:39.932250 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:43:39 crc kubenswrapper[4869]: E0130 21:43:39.932295 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:43:40.432278713 +0000 UTC m=+21.318036738 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.932553 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.932639 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.932781 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.932793 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.932995 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.896407 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.933135 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.933344 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.933744 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.933929 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.934216 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.934225 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.934256 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.934448 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.937659 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.938497 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.938577 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.938873 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.943125 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.945196 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.946513 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: E0130 21:43:39.947362 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:43:39 crc kubenswrapper[4869]: E0130 21:43:39.947392 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:43:39 crc kubenswrapper[4869]: E0130 21:43:39.947410 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:43:39 crc kubenswrapper[4869]: E0130 21:43:39.947498 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 21:43:40.447467927 +0000 UTC m=+21.333225962 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.947764 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 21:43:39 crc kubenswrapper[4869]: E0130 21:43:39.948433 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:43:39 crc kubenswrapper[4869]: E0130 21:43:39.948480 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:43:39 crc kubenswrapper[4869]: E0130 21:43:39.948496 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:43:39 crc kubenswrapper[4869]: E0130 21:43:39.948620 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 21:43:40.448598262 +0000 UTC m=+21.334356277 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.949139 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.951855 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.951870 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.952086 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.952263 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.952306 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.952602 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.952622 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.953022 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.952648 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.952499 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.952822 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.952840 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.953143 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.953738 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.955828 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.956001 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.956017 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.956076 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.956421 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.957502 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13da86d5-c968-4e20-88dd-86b52c6402a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5f07c14a94021761f655ab66d0782a878dacdacbe9abc2c02db132e5f5721a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0c8909f295c8aea1aa2d6da1a709d1692b8fbdd6bb389d339d644655788328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://267daf399d26dd0253eed95c94b44bd4299990a14bb28159ddfd1f324dec6192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\
\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fd36ca3590a916ec08c8ae4d3b410eb11261c6a5295a4274119f93e08079cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.957671 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.958253 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.960364 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.961243 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.962906 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.969117 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.971762 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.974233 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.975120 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.976748 4869 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3" exitCode=255 Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.977742 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3"} Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.977796 4869 scope.go:117] "RemoveContainer" containerID="7f412eecc65b9e4942b9685b797d91b17d8dae998255d25c5f793a997fbd3357" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.979382 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.984307 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.987984 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.988681 4869 scope.go:117] "RemoveContainer" containerID="75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3" Jan 30 21:43:39 crc kubenswrapper[4869]: E0130 21:43:39.988843 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 30 21:43:39 crc kubenswrapper[4869]: I0130 21:43:39.990503 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.004318 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008218 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008256 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008310 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008320 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008329 4869 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008337 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008346 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: 
\"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008354 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008351 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008362 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008413 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008423 4869 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008432 4869 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008441 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008450 4869 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008460 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008391 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008469 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008498 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" 
DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008507 4869 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008515 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008523 4869 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008531 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008539 4869 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008547 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008556 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008564 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008572 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008580 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008589 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008597 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008605 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008613 4869 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008621 4869 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008628 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008637 4869 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008645 4869 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008653 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008661 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008669 4869 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008677 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008685 4869 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008694 4869 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008702 4869 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008710 4869 reconciler_common.go:293] "Volume detached for 
volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008717 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008726 4869 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008734 4869 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008741 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008751 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008759 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008767 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008774 4869 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008782 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008789 4869 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008797 4869 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008805 4869 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008812 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008819 4869 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008827 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008837 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008845 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008852 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008860 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008868 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008875 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008883 4869 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008902 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008914 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008925 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008935 4869 reconciler_common.go:293] 
"Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008945 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008957 4869 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008969 4869 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008981 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.008992 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009002 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009012 4869 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009022 4869 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009035 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009065 4869 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009078 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009090 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009100 4869 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009111 4869 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009121 4869 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009132 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009142 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009152 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009162 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009172 4869 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009183 4869 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009194 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009204 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009216 4869 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009241 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009252 4869 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" 
Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009263 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009273 4869 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009284 4869 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009294 4869 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009302 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009310 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009318 4869 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009326 4869 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009334 4869 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009342 4869 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009349 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009357 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009364 4869 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 
crc kubenswrapper[4869]: I0130 21:43:40.009372 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009381 4869 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009389 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009397 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009404 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009412 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009420 4869 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009428 4869 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009436 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009444 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009452 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009460 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009467 4869 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009474 4869 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009482 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009490 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009498 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009514 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009521 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009529 4869 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009540 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009548 4869 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009572 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009580 4869 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009588 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009598 4869 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009606 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009615 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009623 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009631 4869 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009639 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009647 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009655 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009663 4869 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009671 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009679 4869 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009687 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009695 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009704 4869 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009712 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009720 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009728 4869 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009737 4869 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009745 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009753 4869 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009761 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009769 4869 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009777 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009785 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009793 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009800 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009839 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009849 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009885 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009918 4869 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009926 4869 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009934 4869 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009943 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009952 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009959 4869 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009968 4869 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009975 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009983 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009991 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.009999 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: 
\"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.010007 4869 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.010015 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.010024 4869 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.010031 4869 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.010039 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.010047 4869 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.010055 4869 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.010062 4869 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.010071 4869 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.010079 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.010087 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.010096 4869 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.018249 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.027748 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.041456 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a48f728-068e-4f6e-9794-12d375245df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb0a197870fc1ba15cbe9437452dfeac28e7670fe2383e71f5e29276082f7236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f22a7faa65595d272f6cb952ac0054da3d3b6a0017dc004f52c65d823f1da06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d705fd79e95205c59ebe341512286e7be25a663cdfe41ec50f0f57582c9f5ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f412eecc65b9e4942b9685b797d91b17d8dae998255d25c5f793a997fbd3357\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:43:23Z\\\",\\\"message\\\":\\\"W0130 21:43:23.273923 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 
21:43:23.274397 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809403 cert, and key in /tmp/serving-cert-2802550774/serving-signer.crt, /tmp/serving-cert-2802550774/serving-signer.key\\\\nI0130 21:43:23.507212 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:43:23.513944 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:43:23.514152 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:43:23.515740 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2802550774/tls.crt::/tmp/serving-cert-2802550774/tls.key\\\\\\\"\\\\nF0130 21:43:23.817485 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 21:43:39.192174 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 21:43:39.192295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:43:39.192865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2603741252/tls.crt::/tmp/serving-cert-2603741252/tls.key\\\\\\\"\\\\nI0130 21:43:39.581448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:43:39.592260 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:43:39.592286 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:43:39.592313 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:43:39.592318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:43:39.598780 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:43:39.598801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:43:39.598815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:43:39.598819 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:43:39.598822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0130 21:43:39.598829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:43:39.601912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9563e1c219434c81376559138e931e8457d653060f639891e4a15b3a61bcf2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.053294 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.075776 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.093135 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13da86d5-c968-4e20-88dd-86b52c6402a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5f07c14a94021761f655ab66d0782a878dacdacbe9abc2c02db132e5f5721a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0c8909f295c8aea1aa2d6da1a709d1692b8fbdd6bb389d339d644655788328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://267daf399d26dd0253eed95c94b44bd4299990a14bb28159ddfd1f324dec6192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fd36ca3590a916ec08c8ae4d3b410eb11261c6a5295a4274119f93e08079cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.111079 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.115378 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-v9n4p"] Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.115684 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-v9n4p" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.120073 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.121298 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.130090 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.133664 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 21:43:40 crc kubenswrapper[4869]: W0130 21:43:40.139072 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-3857627caafabbf86adba004b927eb0945c6d68f37d5d96e5758c6a8e901dfa9 WatchSource:0}: Error finding container 3857627caafabbf86adba004b927eb0945c6d68f37d5d96e5758c6a8e901dfa9: Status 404 returned error can't find the container with id 3857627caafabbf86adba004b927eb0945c6d68f37d5d96e5758c6a8e901dfa9 Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.139332 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.139525 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.139657 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.170975 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.189259 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.205398 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.211558 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk245\" (UniqueName: \"kubernetes.io/projected/e94ca98e-63b4-4337-af06-7525b62333b3-kube-api-access-rk245\") pod \"node-resolver-v9n4p\" (UID: \"e94ca98e-63b4-4337-af06-7525b62333b3\") " pod="openshift-dns/node-resolver-v9n4p" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.211599 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e94ca98e-63b4-4337-af06-7525b62333b3-hosts-file\") pod \"node-resolver-v9n4p\" (UID: \"e94ca98e-63b4-4337-af06-7525b62333b3\") " pod="openshift-dns/node-resolver-v9n4p" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.228773 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.243039 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.252692 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.260760 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13da86d5-c968-4e20-88dd-86b52c6402a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5f07c14a94021761f655ab66d0782a878dacdacbe9abc2c02db132e5f5721a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0c8909f295c8aea1aa2d6da1a709d1692b8fbdd6bb389d339d644655788328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://267daf399d26dd0253eed95c94b44bd4299990a14bb28159ddfd1f324dec6192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fd36ca3590a916ec08c8ae4d3b410eb11261c6a5295a4274119f93e08079cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.270715 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.280670 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.281587 4869 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-30 21:38:39 +0000 UTC, rotation deadline is 2026-12-14 14:20:35.080632886 +0000 UTC Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.281640 4869 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7624h36m54.798996052s for next certificate rotation Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.287687 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v9n4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ca98e-63b4-4337-af06-7525b62333b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk245\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v9n4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.305449 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a48f728-068e-4f6e-9794-12d375245df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb0a197870fc1ba15cbe9437452dfeac28e7670fe2383e71f5e29276082f7236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f22a7faa65595d272f6cb952ac0054da3d3b6a0017dc004f52c65d823f1da06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d705fd79e95205c59ebe341512286e7be25a663cdfe41ec50f0f57582c9f5ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f412eecc65b9e4942b9685b797d91b17d8dae998255d25c5f793a997fbd3357\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:43:23Z\\\",\\\"message\\\":\\\"W0130 21:43:23.273923 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 
21:43:23.274397 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809403 cert, and key in /tmp/serving-cert-2802550774/serving-signer.crt, /tmp/serving-cert-2802550774/serving-signer.key\\\\nI0130 21:43:23.507212 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:43:23.513944 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:43:23.514152 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:43:23.515740 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2802550774/tls.crt::/tmp/serving-cert-2802550774/tls.key\\\\\\\"\\\\nF0130 21:43:23.817485 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 21:43:39.192174 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 21:43:39.192295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:43:39.192865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2603741252/tls.crt::/tmp/serving-cert-2603741252/tls.key\\\\\\\"\\\\nI0130 21:43:39.581448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:43:39.592260 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:43:39.592286 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:43:39.592313 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:43:39.592318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:43:39.598780 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:43:39.598801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:43:39.598815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:43:39.598819 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:43:39.598822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0130 21:43:39.598829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:43:39.601912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9563e1c219434c81376559138e931e8457d653060f639891e4a15b3a61bcf2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.312927 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk245\" (UniqueName: \"kubernetes.io/projected/e94ca98e-63b4-4337-af06-7525b62333b3-kube-api-access-rk245\") pod \"node-resolver-v9n4p\" (UID: \"e94ca98e-63b4-4337-af06-7525b62333b3\") " pod="openshift-dns/node-resolver-v9n4p" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.312973 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e94ca98e-63b4-4337-af06-7525b62333b3-hosts-file\") pod \"node-resolver-v9n4p\" (UID: \"e94ca98e-63b4-4337-af06-7525b62333b3\") " pod="openshift-dns/node-resolver-v9n4p" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.313048 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" 
(UniqueName: \"kubernetes.io/host-path/e94ca98e-63b4-4337-af06-7525b62333b3-hosts-file\") pod \"node-resolver-v9n4p\" (UID: \"e94ca98e-63b4-4337-af06-7525b62333b3\") " pod="openshift-dns/node-resolver-v9n4p" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.328120 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk245\" (UniqueName: \"kubernetes.io/projected/e94ca98e-63b4-4337-af06-7525b62333b3-kube-api-access-rk245\") pod \"node-resolver-v9n4p\" (UID: \"e94ca98e-63b4-4337-af06-7525b62333b3\") " pod="openshift-dns/node-resolver-v9n4p" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.414144 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:43:40 crc kubenswrapper[4869]: E0130 21:43:40.414417 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:43:41.414378563 +0000 UTC m=+22.300136588 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.426345 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-v9n4p" Jan 30 21:43:40 crc kubenswrapper[4869]: W0130 21:43:40.435822 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode94ca98e_63b4_4337_af06_7525b62333b3.slice/crio-5d502e8c8697c0a167b80e8d79de3480681b4f746e379f697a695e530c6aca00 WatchSource:0}: Error finding container 5d502e8c8697c0a167b80e8d79de3480681b4f746e379f697a695e530c6aca00: Status 404 returned error can't find the container with id 5d502e8c8697c0a167b80e8d79de3480681b4f746e379f697a695e530c6aca00 Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.499623 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-vzgdv"] Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.500034 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.500151 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-tz8jn"] Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.500313 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.502446 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.502958 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.503345 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.503375 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.503874 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.504160 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.504389 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.505826 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.505861 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.505908 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.515171 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.515208 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.515230 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.515249 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:43:40 crc kubenswrapper[4869]: E0130 21:43:40.515336 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:43:40 crc kubenswrapper[4869]: E0130 21:43:40.515381 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:43:41.51536639 +0000 UTC m=+22.401124415 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:43:40 crc kubenswrapper[4869]: E0130 21:43:40.515416 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:43:40 crc kubenswrapper[4869]: E0130 21:43:40.515436 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:43:41.515429592 +0000 UTC m=+22.401187617 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:43:40 crc kubenswrapper[4869]: E0130 21:43:40.515493 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:43:40 crc kubenswrapper[4869]: E0130 21:43:40.515505 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:43:40 crc kubenswrapper[4869]: E0130 21:43:40.515515 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:43:40 crc kubenswrapper[4869]: E0130 21:43:40.515538 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 21:43:41.515531095 +0000 UTC m=+22.401289120 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:43:40 crc kubenswrapper[4869]: E0130 21:43:40.515579 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:43:40 crc kubenswrapper[4869]: E0130 21:43:40.515588 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:43:40 crc kubenswrapper[4869]: E0130 21:43:40.515595 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:43:40 crc kubenswrapper[4869]: E0130 21:43:40.515617 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 21:43:41.515609537 +0000 UTC m=+22.401367562 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.515599 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.527003 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.535152 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v9n4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ca98e-63b4-4337-af06-7525b62333b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk245\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v9n4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.542995 4869 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6fc0664-5e80-440d-a6e8-4189cdf5c500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vzgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.554177 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a48f728-068e-4f6e-9794-12d375245df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb0a197870fc1ba15cbe9437452dfeac28e7670fe2383e71f5e29276082f7236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f22a7faa65595d272f6cb952ac0054da3d3b6a0017dc004f52c65d823f1da06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d705fd79e95205c59ebe341512286e7be25a663cdfe41ec50f0f57582c9f5ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f412eecc65b9e4942b9685b797d91b17d8dae998255d25c5f793a997fbd3357\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:43:23Z\\\",\\\"message\\\":\\\"W0130 21:43:23.273923 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:43:23.274397 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809403 cert, and key in /tmp/serving-cert-2802550774/serving-signer.crt, /tmp/serving-cert-2802550774/serving-signer.key\\\\nI0130 21:43:23.507212 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:43:23.513944 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:43:23.514152 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:43:23.515740 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2802550774/tls.crt::/tmp/serving-cert-2802550774/tls.key\\\\\\\"\\\\nF0130 21:43:23.817485 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 21:43:39.192174 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 21:43:39.192295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:43:39.192865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2603741252/tls.crt::/tmp/serving-cert-2603741252/tls.key\\\\\\\"\\\\nI0130 21:43:39.581448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:43:39.592260 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:43:39.592286 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:43:39.592313 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:43:39.592318 1 maxinflight.go:120] \\\\\\\"Set denominator for 
mutating requests\\\\\\\" limit=200\\\\nI0130 21:43:39.598780 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:43:39.598801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:43:39.598815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:43:39.598819 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:43:39.598822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:43:39.598829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:43:39.601912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9563e1c219434c81376559138e931e8457d653060f639891e4a15b3a61bcf2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.567411 4869 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.575549 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13da86d5-c968-4e20-88dd-86b52c6402a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5f07c14a94021761f655ab66d0782a878dacdacbe9abc2c02db132e5f5721a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0c8909f295c8aea1aa2d6da1a709d1692b8fbdd6bb389d339d644655788328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://267daf399d26dd0253eed95c94b44bd4299990a14bb28159ddfd1f324dec6192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fd36ca3590a916ec08c8ae4d3b410eb11261c6a5295a4274119f93e08079cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.582999 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.594063 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.611656 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.615937 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dac3c503-e284-4df8-ae5e-0084a884e456-system-cni-dir\") pod \"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.615981 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b6fc0664-5e80-440d-a6e8-4189cdf5c500-proxy-tls\") pod \"machine-config-daemon-vzgdv\" (UID: \"b6fc0664-5e80-440d-a6e8-4189cdf5c500\") " pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.615997 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/dac3c503-e284-4df8-ae5e-0084a884e456-host-run-k8s-cni-cncf-io\") pod \"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.616014 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/dac3c503-e284-4df8-ae5e-0084a884e456-host-var-lib-cni-multus\") pod \"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.616030 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b6fc0664-5e80-440d-a6e8-4189cdf5c500-rootfs\") pod \"machine-config-daemon-vzgdv\" (UID: \"b6fc0664-5e80-440d-a6e8-4189cdf5c500\") " pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.616047 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b6fc0664-5e80-440d-a6e8-4189cdf5c500-mcd-auth-proxy-config\") pod \"machine-config-daemon-vzgdv\" (UID: \"b6fc0664-5e80-440d-a6e8-4189cdf5c500\") " pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 
21:43:40.616076 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/dac3c503-e284-4df8-ae5e-0084a884e456-hostroot\") pod \"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.616097 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgb6l\" (UniqueName: \"kubernetes.io/projected/dac3c503-e284-4df8-ae5e-0084a884e456-kube-api-access-jgb6l\") pod \"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.616129 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dac3c503-e284-4df8-ae5e-0084a884e456-etc-kubernetes\") pod \"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.616153 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dac3c503-e284-4df8-ae5e-0084a884e456-host-var-lib-cni-bin\") pod \"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.616179 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/dac3c503-e284-4df8-ae5e-0084a884e456-os-release\") pod \"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.616199 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/dac3c503-e284-4df8-ae5e-0084a884e456-cni-binary-copy\") pod \"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.616220 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/dac3c503-e284-4df8-ae5e-0084a884e456-host-var-lib-kubelet\") pod \"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.616240 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/dac3c503-e284-4df8-ae5e-0084a884e456-multus-conf-dir\") pod \"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.616261 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/dac3c503-e284-4df8-ae5e-0084a884e456-multus-daemon-config\") pod \"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.616316 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/dac3c503-e284-4df8-ae5e-0084a884e456-host-run-multus-certs\") pod \"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.616355 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dac3c503-e284-4df8-ae5e-0084a884e456-host-run-netns\") pod \"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.616397 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqhgt\" (UniqueName: \"kubernetes.io/projected/b6fc0664-5e80-440d-a6e8-4189cdf5c500-kube-api-access-zqhgt\") pod \"machine-config-daemon-vzgdv\" (UID: \"b6fc0664-5e80-440d-a6e8-4189cdf5c500\") " pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.616425 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dac3c503-e284-4df8-ae5e-0084a884e456-multus-cni-dir\") pod \"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.616460 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/dac3c503-e284-4df8-ae5e-0084a884e456-cnibin\") pod \"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.616483 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/dac3c503-e284-4df8-ae5e-0084a884e456-multus-socket-dir-parent\") pod \"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.624539 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.631554 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.639614 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.647236 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13da86d5-c968-4e20-88dd-86b52c6402a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5f07c14a94021761f655ab66d0782a878dacdacbe9abc2c02db132e5f5721a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0c8909f295c8aea1aa2d6da1a709d1692b8fbdd6bb389d339d644655788328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://267daf399d26dd0253eed95c94b44bd4299990a14bb28159ddfd1f324dec6192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fd36ca3590a916ec08c8ae4d3b410eb11261c6a5295a4274119f93e08079cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.656077 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.662110 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v9n4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ca98e-63b4-4337-af06-7525b62333b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk245\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v9n4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.669141 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.676591 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.684232 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6fc0664-5e80-440d-a6e8-4189cdf5c500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vzgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.702178 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tz8jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac3c503-e284-4df8-ae5e-0084a884e456\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgb6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tz8jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.716393 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a48f728-068e-4f6e-9794-12d375245df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb0a197870fc1ba15cbe9437452dfeac28e7670fe2383e71f5e29276082f7236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f22a7faa65595d272f6cb952ac0054da3d3b6a0017dc004f52c65d823f1da06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d705fd79e95205c59ebe341512286e7be25a663cdfe41ec50f0f57582c9f5ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://7f412eecc65b9e4942b9685b797d91b17d8dae998255d25c5f793a997fbd3357\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:43:23Z\\\",\\\"message\\\":\\\"W0130 21:43:23.273923 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:43:23.274397 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809403 cert, and key in /tmp/serving-cert-2802550774/serving-signer.crt, /tmp/serving-cert-2802550774/serving-signer.key\\\\nI0130 21:43:23.507212 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:43:23.513944 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:43:23.514152 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:43:23.515740 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2802550774/tls.crt::/tmp/serving-cert-2802550774/tls.key\\\\\\\"\\\\nF0130 21:43:23.817485 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 21:43:39.192174 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 21:43:39.192295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:43:39.192865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2603741252/tls.crt::/tmp/serving-cert-2603741252/tls.key\\\\\\\"\\\\nI0130 21:43:39.581448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:43:39.592260 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:43:39.592286 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:43:39.592313 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:43:39.592318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:43:39.598780 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:43:39.598801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:43:39.598815 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:43:39.598819 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:43:39.598822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:43:39.598829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:43:39.601912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9563e1c219434c81376559138e931e8457d653060f639891e4a15b3a61bcf2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.717625 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dac3c503-e284-4df8-ae5e-0084a884e456-system-cni-dir\") pod \"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.717672 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/dac3c503-e284-4df8-ae5e-0084a884e456-host-run-k8s-cni-cncf-io\") pod 
\"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.717696 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/dac3c503-e284-4df8-ae5e-0084a884e456-host-var-lib-cni-multus\") pod \"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.717719 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b6fc0664-5e80-440d-a6e8-4189cdf5c500-proxy-tls\") pod \"machine-config-daemon-vzgdv\" (UID: \"b6fc0664-5e80-440d-a6e8-4189cdf5c500\") " pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.717750 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b6fc0664-5e80-440d-a6e8-4189cdf5c500-rootfs\") pod \"machine-config-daemon-vzgdv\" (UID: \"b6fc0664-5e80-440d-a6e8-4189cdf5c500\") " pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.717773 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b6fc0664-5e80-440d-a6e8-4189cdf5c500-mcd-auth-proxy-config\") pod \"machine-config-daemon-vzgdv\" (UID: \"b6fc0664-5e80-440d-a6e8-4189cdf5c500\") " pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.717806 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/dac3c503-e284-4df8-ae5e-0084a884e456-host-var-lib-cni-multus\") pod \"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.717799 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/dac3c503-e284-4df8-ae5e-0084a884e456-hostroot\") pod \"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.717812 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/dac3c503-e284-4df8-ae5e-0084a884e456-host-run-k8s-cni-cncf-io\") pod \"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.717853 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgb6l\" (UniqueName: \"kubernetes.io/projected/dac3c503-e284-4df8-ae5e-0084a884e456-kube-api-access-jgb6l\") pod \"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.717869 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b6fc0664-5e80-440d-a6e8-4189cdf5c500-rootfs\") pod \"machine-config-daemon-vzgdv\" (UID: \"b6fc0664-5e80-440d-a6e8-4189cdf5c500\") " 
pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.717963 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dac3c503-e284-4df8-ae5e-0084a884e456-system-cni-dir\") pod \"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.718040 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dac3c503-e284-4df8-ae5e-0084a884e456-etc-kubernetes\") pod \"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.718042 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/dac3c503-e284-4df8-ae5e-0084a884e456-hostroot\") pod \"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.718120 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dac3c503-e284-4df8-ae5e-0084a884e456-etc-kubernetes\") pod \"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.718179 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/dac3c503-e284-4df8-ae5e-0084a884e456-os-release\") pod \"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.718274 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/dac3c503-e284-4df8-ae5e-0084a884e456-cni-binary-copy\") pod \"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.718303 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dac3c503-e284-4df8-ae5e-0084a884e456-host-var-lib-cni-bin\") pod \"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.718365 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/dac3c503-e284-4df8-ae5e-0084a884e456-host-var-lib-kubelet\") pod \"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.718386 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/dac3c503-e284-4df8-ae5e-0084a884e456-multus-conf-dir\") pod \"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.718404 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/dac3c503-e284-4df8-ae5e-0084a884e456-multus-daemon-config\") pod \"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.718462 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/dac3c503-e284-4df8-ae5e-0084a884e456-host-run-multus-certs\") pod \"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.718489 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dac3c503-e284-4df8-ae5e-0084a884e456-multus-cni-dir\") pod \"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.718508 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/dac3c503-e284-4df8-ae5e-0084a884e456-cnibin\") pod \"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.718526 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/dac3c503-e284-4df8-ae5e-0084a884e456-multus-socket-dir-parent\") pod \"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.718543 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dac3c503-e284-4df8-ae5e-0084a884e456-host-run-netns\") pod \"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.718579 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqhgt\" (UniqueName: \"kubernetes.io/projected/b6fc0664-5e80-440d-a6e8-4189cdf5c500-kube-api-access-zqhgt\") pod \"machine-config-daemon-vzgdv\" (UID: \"b6fc0664-5e80-440d-a6e8-4189cdf5c500\") " pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.718931 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/dac3c503-e284-4df8-ae5e-0084a884e456-host-run-multus-certs\") pod \"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.719010 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dac3c503-e284-4df8-ae5e-0084a884e456-multus-cni-dir\") pod \"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.718245 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/dac3c503-e284-4df8-ae5e-0084a884e456-os-release\") pod \"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc 
kubenswrapper[4869]: I0130 21:43:40.719043 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/dac3c503-e284-4df8-ae5e-0084a884e456-multus-socket-dir-parent\") pod \"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.719054 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/dac3c503-e284-4df8-ae5e-0084a884e456-cnibin\") pod \"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.719063 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dac3c503-e284-4df8-ae5e-0084a884e456-host-var-lib-cni-bin\") pod \"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.719087 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/dac3c503-e284-4df8-ae5e-0084a884e456-host-var-lib-kubelet\") pod \"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.719088 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/dac3c503-e284-4df8-ae5e-0084a884e456-multus-conf-dir\") pod \"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.719053 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dac3c503-e284-4df8-ae5e-0084a884e456-host-run-netns\") pod \"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.719193 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b6fc0664-5e80-440d-a6e8-4189cdf5c500-mcd-auth-proxy-config\") pod \"machine-config-daemon-vzgdv\" (UID: \"b6fc0664-5e80-440d-a6e8-4189cdf5c500\") " pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.719416 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/dac3c503-e284-4df8-ae5e-0084a884e456-cni-binary-copy\") pod \"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.719605 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/dac3c503-e284-4df8-ae5e-0084a884e456-multus-daemon-config\") pod \"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.739346 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqhgt\" (UniqueName: \"kubernetes.io/projected/b6fc0664-5e80-440d-a6e8-4189cdf5c500-kube-api-access-zqhgt\") 
pod \"machine-config-daemon-vzgdv\" (UID: \"b6fc0664-5e80-440d-a6e8-4189cdf5c500\") " pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.740309 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b6fc0664-5e80-440d-a6e8-4189cdf5c500-proxy-tls\") pod \"machine-config-daemon-vzgdv\" (UID: \"b6fc0664-5e80-440d-a6e8-4189cdf5c500\") " pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.740829 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgb6l\" (UniqueName: \"kubernetes.io/projected/dac3c503-e284-4df8-ae5e-0084a884e456-kube-api-access-jgb6l\") pod \"multus-tz8jn\" (UID: \"dac3c503-e284-4df8-ae5e-0084a884e456\") " pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.821623 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.826675 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 08:34:23.089460942 +0000 UTC Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.827867 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-tz8jn" Jan 30 21:43:40 crc kubenswrapper[4869]: W0130 21:43:40.835259 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6fc0664_5e80_440d_a6e8_4189cdf5c500.slice/crio-01ac7f704c1df0f40192598c3c09fcacbba313b0ddc59e3943248d52558f5010 WatchSource:0}: Error finding container 01ac7f704c1df0f40192598c3c09fcacbba313b0ddc59e3943248d52558f5010: Status 404 returned error can't find the container with id 01ac7f704c1df0f40192598c3c09fcacbba313b0ddc59e3943248d52558f5010 Jan 30 21:43:40 crc kubenswrapper[4869]: W0130 21:43:40.845356 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddac3c503_e284_4df8_ae5e_0084a884e456.slice/crio-323743c16cb0b88362607deecc40370fd752d96d1df8ab9e5db7161f5d27611c WatchSource:0}: Error finding container 323743c16cb0b88362607deecc40370fd752d96d1df8ab9e5db7161f5d27611c: Status 404 returned error can't find the container with id 323743c16cb0b88362607deecc40370fd752d96d1df8ab9e5db7161f5d27611c Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.859071 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-jdbl9"] Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.859812 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.864243 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.864262 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.865605 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-stqvf"] Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.866537 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:40 crc kubenswrapper[4869]: W0130 21:43:40.868787 4869 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-config": failed to list *v1.ConfigMap: configmaps "ovnkube-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 30 21:43:40 crc kubenswrapper[4869]: E0130 21:43:40.868827 4869 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovnkube-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 21:43:40 crc kubenswrapper[4869]: W0130 21:43:40.868864 4869 reflector.go:561] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 30 21:43:40 crc kubenswrapper[4869]: E0130 21:43:40.868873 4869 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 21:43:40 crc kubenswrapper[4869]: W0130 21:43:40.868942 4869 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": failed to list *v1.ConfigMap: configmaps "ovnkube-script-lib" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 30 21:43:40 crc kubenswrapper[4869]: E0130 21:43:40.868954 4869 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovnkube-script-lib\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 21:43:40 crc kubenswrapper[4869]: W0130 21:43:40.868997 4869 reflector.go:561] 
object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": failed to list *v1.Secret: secrets "ovn-node-metrics-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 30 21:43:40 crc kubenswrapper[4869]: E0130 21:43:40.869007 4869 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-node-metrics-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 21:43:40 crc kubenswrapper[4869]: W0130 21:43:40.869042 4869 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": failed to list *v1.Secret: secrets "ovn-kubernetes-node-dockercfg-pwtwl" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 30 21:43:40 crc kubenswrapper[4869]: E0130 21:43:40.869052 4869 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-pwtwl\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-kubernetes-node-dockercfg-pwtwl\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 21:43:40 crc kubenswrapper[4869]: W0130 21:43:40.869180 4869 reflector.go:561] object-"openshift-ovn-kubernetes"/"env-overrides": failed to list *v1.ConfigMap: configmaps "env-overrides" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 30 21:43:40 crc kubenswrapper[4869]: E0130 21:43:40.869201 4869 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"env-overrides\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"env-overrides\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 21:43:40 crc kubenswrapper[4869]: W0130 21:43:40.869234 4869 reflector.go:561] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 30 21:43:40 crc kubenswrapper[4869]: E0130 21:43:40.869244 4869 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.878242 4869 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6fc0664-5e80-440d-a6e8-4189cdf5c500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vzgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.897090 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tz8jn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac3c503-e284-4df8-ae5e-0084a884e456\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgb6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tz8jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.921671 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a48f728-068e-4f6e-9794-12d375245df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb0a197870fc1ba15cbe9437452dfeac28e7670fe2383e71f5e29276082f7236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f22a7faa65595d272f6cb952ac0054da3d3b6a0017dc004f52c65d823f1da06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d705fd79e95205c59ebe341512286e7be25a663cdfe41ec50f0f57582c9f5ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f412eecc65b9e4942b9685b797d91b17d8dae998255d25c5f793a997fbd3357\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:43:23Z\\\",\\\"message\\\":\\\"W0130 21:43:23.273923 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:43:23.274397 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809403 cert, and key in /tmp/serving-cert-2802550774/serving-signer.crt, /tmp/serving-cert-2802550774/serving-signer.key\\\\nI0130 21:43:23.507212 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:43:23.513944 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:43:23.514152 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:43:23.515740 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2802550774/tls.crt::/tmp/serving-cert-2802550774/tls.key\\\\\\\"\\\\nF0130 21:43:23.817485 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 21:43:39.192174 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 21:43:39.192295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:43:39.192865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2603741252/tls.crt::/tmp/serving-cert-2603741252/tls.key\\\\\\\"\\\\nI0130 21:43:39.581448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:43:39.592260 1 maxinflight.go:139] 
\\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:43:39.592286 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:43:39.592313 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:43:39.592318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:43:39.598780 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:43:39.598801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:43:39.598815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:43:39.598819 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:43:39.598822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:43:39.598829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:43:39.601912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9563e1c219434c81376559138e931e8457d653060f639891e4a15b3a61bcf2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.969187 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.979930 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" event={"ID":"b6fc0664-5e80-440d-a6e8-4189cdf5c500","Type":"ContainerStarted","Data":"01ac7f704c1df0f40192598c3c09fcacbba313b0ddc59e3943248d52558f5010"} Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.980641 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"7dd6575d4e96dbd681eba39d66c23c585daa08cfb9e097e902b08d01525a4319"} Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.981672 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"e768b90cb0dae0e7c594e45deebf9178321fecc14339fa1e49769e13c950ff9f"} Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.981705 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"3857627caafabbf86adba004b927eb0945c6d68f37d5d96e5758c6a8e901dfa9"} Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.983459 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.985356 4869 scope.go:117] "RemoveContainer" containerID="75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3" Jan 30 21:43:40 crc kubenswrapper[4869]: E0130 21:43:40.985471 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.986160 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tz8jn" event={"ID":"dac3c503-e284-4df8-ae5e-0084a884e456","Type":"ContainerStarted","Data":"323743c16cb0b88362607deecc40370fd752d96d1df8ab9e5db7161f5d27611c"} Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.987187 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-v9n4p" event={"ID":"e94ca98e-63b4-4337-af06-7525b62333b3","Type":"ContainerStarted","Data":"c7c53920051c0c7658e9ffebac1382950ffea94ead520f2ca1b04cb6a0f9e15a"} Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.987212 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-v9n4p" event={"ID":"e94ca98e-63b4-4337-af06-7525b62333b3","Type":"ContainerStarted","Data":"5d502e8c8697c0a167b80e8d79de3480681b4f746e379f697a695e530c6aca00"} Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.988739 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"75a98ef3151784bd76a916ca4dc6d85c25c10bf0d9eca4444780ceb2c826e439"} Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.988768 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"7741a57e2e55c2f88115cf30643e5c4cd87ab58081677cccdd027bf1307a7a83"} Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.988780 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e6e9ef511a7926b50e2042ae2ea1b8eeb0e9bc58753a60877cd5dfb51c4560a8"} Jan 30 21:43:40 crc kubenswrapper[4869]: I0130 21:43:40.992519 4869 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13da86d5-c968-4e20-88dd-86b52c6402a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5f07c14a94021761f655ab66d0782a878dacdacbe9abc2c02db132e5f5721a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0c8909f295c8aea1aa2d6da1a709d1692b8fbdd6bb389d339d644655788328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://267daf399d26dd0253eed95c94b44bd4299990a14bb28159ddfd1f324dec6192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fd36ca3590a916ec08c8ae4d3b410eb11261c6a5295a4274
119f93e08079cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.020170 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.021659 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-run-systemd\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.021695 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng79r\" (UniqueName: \"kubernetes.io/projected/ecd656f9-7188-4998-8195-c2ec92442b7d-kube-api-access-ng79r\") pod \"multus-additional-cni-plugins-jdbl9\" (UID: \"ecd656f9-7188-4998-8195-c2ec92442b7d\") " pod="openshift-multus/multus-additional-cni-plugins-jdbl9" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.021716 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-host-slash\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.021730 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-log-socket\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.021760 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-var-lib-openvswitch\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.021775 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ecd656f9-7188-4998-8195-c2ec92442b7d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jdbl9\" (UID: \"ecd656f9-7188-4998-8195-c2ec92442b7d\") " pod="openshift-multus/multus-additional-cni-plugins-jdbl9" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.021791 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-host-run-ovn-kubernetes\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.021808 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c39d4fe5-06cd-4ea4-8336-bd481332c475-ovnkube-config\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.021823 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-run-openvswitch\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.021846 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-systemd-units\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.021868 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ecd656f9-7188-4998-8195-c2ec92442b7d-cnibin\") pod \"multus-additional-cni-plugins-jdbl9\" (UID: \"ecd656f9-7188-4998-8195-c2ec92442b7d\") " pod="openshift-multus/multus-additional-cni-plugins-jdbl9" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.021884 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-run-ovn\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.021912 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ecd656f9-7188-4998-8195-c2ec92442b7d-os-release\") pod \"multus-additional-cni-plugins-jdbl9\" (UID: \"ecd656f9-7188-4998-8195-c2ec92442b7d\") " pod="openshift-multus/multus-additional-cni-plugins-jdbl9" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.021930 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ecd656f9-7188-4998-8195-c2ec92442b7d-cni-binary-copy\") pod \"multus-additional-cni-plugins-jdbl9\" (UID: \"ecd656f9-7188-4998-8195-c2ec92442b7d\") " pod="openshift-multus/multus-additional-cni-plugins-jdbl9" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.021946 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-host-cni-netd\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.021960 4869 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.021976 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c39d4fe5-06cd-4ea4-8336-bd481332c475-ovnkube-script-lib\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.021991 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8w6z\" (UniqueName: \"kubernetes.io/projected/c39d4fe5-06cd-4ea4-8336-bd481332c475-kube-api-access-j8w6z\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.022007 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c39d4fe5-06cd-4ea4-8336-bd481332c475-env-overrides\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.022020 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ecd656f9-7188-4998-8195-c2ec92442b7d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jdbl9\" (UID: \"ecd656f9-7188-4998-8195-c2ec92442b7d\") " pod="openshift-multus/multus-additional-cni-plugins-jdbl9" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.022033 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-host-cni-bin\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.022048 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ecd656f9-7188-4998-8195-c2ec92442b7d-system-cni-dir\") pod \"multus-additional-cni-plugins-jdbl9\" (UID: \"ecd656f9-7188-4998-8195-c2ec92442b7d\") " pod="openshift-multus/multus-additional-cni-plugins-jdbl9" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.022071 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-host-run-netns\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.022085 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-etc-openvswitch\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.022097 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-node-log\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.022132 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-host-kubelet\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.022146 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c39d4fe5-06cd-4ea4-8336-bd481332c475-ovn-node-metrics-cert\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.046987 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.088637 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.122687 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ecd656f9-7188-4998-8195-c2ec92442b7d-os-release\") pod \"multus-additional-cni-plugins-jdbl9\" (UID: \"ecd656f9-7188-4998-8195-c2ec92442b7d\") " pod="openshift-multus/multus-additional-cni-plugins-jdbl9" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.122729 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ecd656f9-7188-4998-8195-c2ec92442b7d-cni-binary-copy\") pod \"multus-additional-cni-plugins-jdbl9\" (UID: \"ecd656f9-7188-4998-8195-c2ec92442b7d\") " pod="openshift-multus/multus-additional-cni-plugins-jdbl9" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.122785 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-run-ovn\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.122801 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-host-cni-netd\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.122819 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.122838 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c39d4fe5-06cd-4ea4-8336-bd481332c475-ovnkube-script-lib\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.122852 4869 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-j8w6z\" (UniqueName: \"kubernetes.io/projected/c39d4fe5-06cd-4ea4-8336-bd481332c475-kube-api-access-j8w6z\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.122905 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ecd656f9-7188-4998-8195-c2ec92442b7d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jdbl9\" (UID: \"ecd656f9-7188-4998-8195-c2ec92442b7d\") " pod="openshift-multus/multus-additional-cni-plugins-jdbl9" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.122933 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c39d4fe5-06cd-4ea4-8336-bd481332c475-env-overrides\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.122950 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-host-cni-bin\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.122965 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ecd656f9-7188-4998-8195-c2ec92442b7d-system-cni-dir\") pod \"multus-additional-cni-plugins-jdbl9\" (UID: \"ecd656f9-7188-4998-8195-c2ec92442b7d\") " pod="openshift-multus/multus-additional-cni-plugins-jdbl9" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.122995 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-host-run-netns\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.123009 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-etc-openvswitch\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.123023 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-node-log\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.123081 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-host-kubelet\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.123097 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c39d4fe5-06cd-4ea4-8336-bd481332c475-ovn-node-metrics-cert\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.123117 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-run-systemd\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.123131 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-host-slash\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.123146 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-log-socket\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.123161 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng79r\" (UniqueName: \"kubernetes.io/projected/ecd656f9-7188-4998-8195-c2ec92442b7d-kube-api-access-ng79r\") pod \"multus-additional-cni-plugins-jdbl9\" (UID: \"ecd656f9-7188-4998-8195-c2ec92442b7d\") " pod="openshift-multus/multus-additional-cni-plugins-jdbl9" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.123196 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-var-lib-openvswitch\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.123224 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ecd656f9-7188-4998-8195-c2ec92442b7d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jdbl9\" (UID: \"ecd656f9-7188-4998-8195-c2ec92442b7d\") " pod="openshift-multus/multus-additional-cni-plugins-jdbl9" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.123244 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-host-run-ovn-kubernetes\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.123262 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c39d4fe5-06cd-4ea4-8336-bd481332c475-ovnkube-config\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.123283 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-run-openvswitch\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.123315 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-systemd-units\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.123362 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ecd656f9-7188-4998-8195-c2ec92442b7d-cnibin\") pod \"multus-additional-cni-plugins-jdbl9\" (UID: \"ecd656f9-7188-4998-8195-c2ec92442b7d\") " pod="openshift-multus/multus-additional-cni-plugins-jdbl9" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.123429 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ecd656f9-7188-4998-8195-c2ec92442b7d-cnibin\") pod \"multus-additional-cni-plugins-jdbl9\" (UID: \"ecd656f9-7188-4998-8195-c2ec92442b7d\") " pod="openshift-multus/multus-additional-cni-plugins-jdbl9" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.123551 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ecd656f9-7188-4998-8195-c2ec92442b7d-os-release\") pod \"multus-additional-cni-plugins-jdbl9\" (UID: \"ecd656f9-7188-4998-8195-c2ec92442b7d\") " pod="openshift-multus/multus-additional-cni-plugins-jdbl9" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.123595 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-var-lib-openvswitch\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.123611 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-run-systemd\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.123640 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-log-socket\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.123655 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-host-kubelet\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.123682 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-host-slash\") pod 
\"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.123716 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-host-run-ovn-kubernetes\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.123773 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.123787 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-systemd-units\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.123808 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-run-ovn\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.123832 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-host-cni-netd\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.123848 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ecd656f9-7188-4998-8195-c2ec92442b7d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jdbl9\" (UID: \"ecd656f9-7188-4998-8195-c2ec92442b7d\") " pod="openshift-multus/multus-additional-cni-plugins-jdbl9" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.123829 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-run-openvswitch\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.123953 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-etc-openvswitch\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.124006 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-host-cni-bin\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.124018 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-host-run-netns\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.124069 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ecd656f9-7188-4998-8195-c2ec92442b7d-system-cni-dir\") pod \"multus-additional-cni-plugins-jdbl9\" (UID: \"ecd656f9-7188-4998-8195-c2ec92442b7d\") " pod="openshift-multus/multus-additional-cni-plugins-jdbl9" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.124075 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-node-log\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.124069 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ecd656f9-7188-4998-8195-c2ec92442b7d-cni-binary-copy\") pod \"multus-additional-cni-plugins-jdbl9\" (UID: \"ecd656f9-7188-4998-8195-c2ec92442b7d\") " pod="openshift-multus/multus-additional-cni-plugins-jdbl9" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.124952 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ecd656f9-7188-4998-8195-c2ec92442b7d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jdbl9\" (UID: \"ecd656f9-7188-4998-8195-c2ec92442b7d\") " pod="openshift-multus/multus-additional-cni-plugins-jdbl9" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.128747 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.167788 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng79r\" (UniqueName: \"kubernetes.io/projected/ecd656f9-7188-4998-8195-c2ec92442b7d-kube-api-access-ng79r\") pod \"multus-additional-cni-plugins-jdbl9\" (UID: \"ecd656f9-7188-4998-8195-c2ec92442b7d\") " pod="openshift-multus/multus-additional-cni-plugins-jdbl9" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.207303 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v9n4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ca98e-63b4-4337-af06-7525b62333b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk245\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-v9n4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.251448 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.292656 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd656f9-7188-4998-8195-c2ec92442b7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jdbl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.309933 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.339314 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.368154 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a98ef3151784bd76a916ca4dc6d85c25c10bf0d9eca4444780ceb2c826e439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7741a57e2e55c2f88115cf30643e5c4cd87ab58081677cccdd027bf1307a7a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.410820 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13da86d5-c968-4e20-88dd-86b52c6402a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5f07c14a94021761f655ab66d0782a878dacdacbe9abc2c02db132e5f5721a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0c8909f295c8aea1aa2d6da1a709d1692b8fbdd6bb389d339d644655788328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://267daf399d26dd0253eed95c94b44bd4299990a14bb28159ddfd1f324dec6192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fd36ca3590a916ec08c8ae4d3b410eb11261c6a5295a4274119f93e08079cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.429282 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:43:41 crc kubenswrapper[4869]: E0130 21:43:41.429513 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:43:43.429495588 +0000 UTC m=+24.315253603 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.454524 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.490037 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v9n4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ca98e-63b4-4337-af06-7525b62333b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c53920051c0c7658e9ffebac1382950ffea94ead520f2ca1b04cb6a0f9e15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk245\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v9n4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T21:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.530750 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.530803 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.530837 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.530871 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:43:41 crc kubenswrapper[4869]: E0130 21:43:41.531033 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:43:41 crc kubenswrapper[4869]: E0130 21:43:41.531060 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:43:41 crc kubenswrapper[4869]: E0130 21:43:41.531098 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:43:41 crc kubenswrapper[4869]: E0130 21:43:41.531118 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:43:41 crc kubenswrapper[4869]: E0130 21:43:41.531155 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 21:43:43.531133885 +0000 UTC m=+24.416891910 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:43:41 crc kubenswrapper[4869]: E0130 21:43:41.531174 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:43:43.531166516 +0000 UTC m=+24.416924631 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:43:41 crc kubenswrapper[4869]: E0130 21:43:41.531046 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:43:41 crc kubenswrapper[4869]: E0130 21:43:41.531185 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:43:41 crc kubenswrapper[4869]: E0130 21:43:41.531193 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:43:41 crc kubenswrapper[4869]: E0130 21:43:41.531204 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:43:41 crc kubenswrapper[4869]: E0130 21:43:41.531219 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:43:43.531202897 +0000 UTC m=+24.416960922 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:43:41 crc kubenswrapper[4869]: E0130 21:43:41.531233 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 21:43:43.531227458 +0000 UTC m=+24.416985483 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.534030 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.568755 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.609782 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd656f9-7188-4998-8195-c2ec92442b7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jdbl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.651691 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e768b90cb0dae0e7c594e45deebf9178321fecc14339fa1e49769e13c950ff9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.688712 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6fc0664-5e80-440d-a6e8-4189cdf5c500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vzgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.700171 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.750700 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tz8jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac3c503-e284-4df8-ae5e-0084a884e456\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgb6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tz8jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.798462 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c39d4fe5-06cd-4ea4-8336-bd481332c475\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-stqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.827662 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 06:41:40.068059622 +0000 UTC Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.832048 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a48f728-068e-4f6e-9794-12d375245df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb0a197870fc1ba15cbe9437452dfeac28e7670fe2383e71f5e29276082f7236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f22a7faa65595d272f6cb952ac0054da3d3b6a0017dc004f52c65d823f1da06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d705fd79e95205c59ebe341512286e7be25a663cdfe41ec50f0f57582c9f5ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 21:43:39.192174 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 21:43:39.192295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:43:39.192865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2603741252/tls.crt::/tmp/serving-cert-2603741252/tls.key\\\\\\\"\\\\nI0130 21:43:39.581448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:43:39.592260 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:43:39.592286 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:43:39.592313 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:43:39.592318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:43:39.598780 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:43:39.598801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:43:39.598815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:43:39.598819 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:43:39.598822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:43:39.598829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:43:39.601912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9563e1c219434c81376559138e931e8457d653060f639891e4a15b3a61bcf2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:41Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.874213 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.876221 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.876240 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.876216 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:43:41 crc kubenswrapper[4869]: E0130 21:43:41.876321 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:43:41 crc kubenswrapper[4869]: E0130 21:43:41.876393 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:43:41 crc kubenswrapper[4869]: E0130 21:43:41.876478 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.880193 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.880700 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.881888 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.882535 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.883171 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.883571 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.884095 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.884687 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.884968 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c39d4fe5-06cd-4ea4-8336-bd481332c475-ovnkube-config\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.885618 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.886250 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.887185 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.887732 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.888988 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8w6z\" (UniqueName: \"kubernetes.io/projected/c39d4fe5-06cd-4ea4-8336-bd481332c475-kube-api-access-j8w6z\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.889262 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.889748 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.890252 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.891145 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.891779 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.892682 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.893077 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.893646 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.894780 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.895317 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.896323 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.896741 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.897904 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.898344 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.898973 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.900056 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.900495 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.901471 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.901952 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.902744 4869 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.902836 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.904449 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.905328 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.905708 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.907092 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.907733 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.908611 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.909529 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.910582 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.911105 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.912080 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.912838 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.913775 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.914266 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.915193 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.915745 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" 
path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.916779 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.917430 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.918288 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.918780 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.919710 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.920291 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.920805 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.993192 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" event={"ID":"b6fc0664-5e80-440d-a6e8-4189cdf5c500","Type":"ContainerStarted","Data":"a73c5cac1fdcb8d81bccdb2eea7b34e75752439d7871493f4afdd0325f02b2ec"} Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.993249 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" event={"ID":"b6fc0664-5e80-440d-a6e8-4189cdf5c500","Type":"ContainerStarted","Data":"30fa528823e4fa19f4de380fa7c5ab78f40c763a6ea6624ba20e1a65a81980d2"} Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.994539 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tz8jn" event={"ID":"dac3c503-e284-4df8-ae5e-0084a884e456","Type":"ContainerStarted","Data":"6e6d47a8f0a09eac03369c1f21e44d8ec6aaf85d4d7ad180579ecd30a6311061"} Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.996482 4869 generic.go:334] "Generic (PLEG): container finished" podID="ecd656f9-7188-4998-8195-c2ec92442b7d" containerID="3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538" exitCode=0 Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.996558 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" event={"ID":"ecd656f9-7188-4998-8195-c2ec92442b7d","Type":"ContainerDied","Data":"3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538"} Jan 30 21:43:41 crc kubenswrapper[4869]: I0130 21:43:41.996613 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-additional-cni-plugins-jdbl9" event={"ID":"ecd656f9-7188-4998-8195-c2ec92442b7d","Type":"ContainerStarted","Data":"195b50647fafaff5d228afeabff6e4008f6706c1910be8c7b8be24daa6379c61"} Jan 30 21:43:42 crc kubenswrapper[4869]: I0130 21:43:42.010840 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e768b90cb0dae0e7c594e45deebf9178321fecc14339fa1e49769e13c950ff9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:42 crc kubenswrapper[4869]: I0130 21:43:42.025528 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6fc0664-5e80-440d-a6e8-4189cdf5c500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a73c5cac1fdcb8d81bccdb2eea7b34e75752439d7871493f4afdd0325f02b2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fa528823e4fa19f4de380fa7c5ab78f40c763a6ea6624ba20e1a65a81980d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vzgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:42 crc kubenswrapper[4869]: I0130 21:43:42.040355 4869 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 30 21:43:42 crc kubenswrapper[4869]: I0130 21:43:42.044464 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tz8jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac3c503-e284-4df8-ae5e-0084a884e456\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgb6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tz8jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:42 crc kubenswrapper[4869]: I0130 21:43:42.079700 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c39d4fe5-06cd-4ea4-8336-bd481332c475\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-stqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:42 crc kubenswrapper[4869]: I0130 21:43:42.094966 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a48f728-068e-4f6e-9794-12d375245df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb0a197870fc1ba15cbe9437452dfeac28e7670fe2383e71f5e29276082f7236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f22a7faa65595d272f6cb952ac0054da3d3b6a0017dc004f52c65d823f1da06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d705fd79e95205c59ebe341512286e7be25a663cdfe41ec50f0f57582c9f5ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 21:43:39.192174 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 21:43:39.192295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:43:39.192865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2603741252/tls.crt::/tmp/serving-cert-2603741252/tls.key\\\\\\\"\\\\nI0130 21:43:39.581448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:43:39.592260 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:43:39.592286 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:43:39.592313 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:43:39.592318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:43:39.598780 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:43:39.598801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:43:39.598815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:43:39.598819 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:43:39.598822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:43:39.598829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:43:39.601912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9563e1c219434c81376559138e931e8457d653060f639891e4a15b3a61bcf2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:42 crc kubenswrapper[4869]: E0130 21:43:42.124547 4869 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-script-lib: failed to sync configmap cache: timed out waiting for the condition Jan 30 21:43:42 crc kubenswrapper[4869]: E0130 21:43:42.124551 4869 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/env-overrides: failed to sync configmap cache: timed out waiting for the condition Jan 30 21:43:42 crc kubenswrapper[4869]: E0130 21:43:42.124633 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c39d4fe5-06cd-4ea4-8336-bd481332c475-ovnkube-script-lib podName:c39d4fe5-06cd-4ea4-8336-bd481332c475 nodeName:}" failed. No retries permitted until 2026-01-30 21:43:42.624612904 +0000 UTC m=+23.510370929 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "ovnkube-script-lib" (UniqueName: "kubernetes.io/configmap/c39d4fe5-06cd-4ea4-8336-bd481332c475-ovnkube-script-lib") pod "ovnkube-node-stqvf" (UID: "c39d4fe5-06cd-4ea4-8336-bd481332c475") : failed to sync configmap cache: timed out waiting for the condition
Jan 30 21:43:42 crc kubenswrapper[4869]: E0130 21:43:42.124672 4869 secret.go:188] Couldn't get secret openshift-ovn-kubernetes/ovn-node-metrics-cert: failed to sync secret cache: timed out waiting for the condition
Jan 30 21:43:42 crc kubenswrapper[4869]: E0130 21:43:42.124719 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c39d4fe5-06cd-4ea4-8336-bd481332c475-env-overrides podName:c39d4fe5-06cd-4ea4-8336-bd481332c475 nodeName:}" failed. No retries permitted until 2026-01-30 21:43:42.624683446 +0000 UTC m=+23.510441511 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "env-overrides" (UniqueName: "kubernetes.io/configmap/c39d4fe5-06cd-4ea4-8336-bd481332c475-env-overrides") pod "ovnkube-node-stqvf" (UID: "c39d4fe5-06cd-4ea4-8336-bd481332c475") : failed to sync configmap cache: timed out waiting for the condition
Jan 30 21:43:42 crc kubenswrapper[4869]: E0130 21:43:42.124760 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c39d4fe5-06cd-4ea4-8336-bd481332c475-ovn-node-metrics-cert podName:c39d4fe5-06cd-4ea4-8336-bd481332c475 nodeName:}" failed. No retries permitted until 2026-01-30 21:43:42.624744998 +0000 UTC m=+23.510503053 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ovn-node-metrics-cert" (UniqueName: "kubernetes.io/secret/c39d4fe5-06cd-4ea4-8336-bd481332c475-ovn-node-metrics-cert") pod "ovnkube-node-stqvf" (UID: "c39d4fe5-06cd-4ea4-8336-bd481332c475") : failed to sync secret cache: timed out waiting for the condition
Jan 30 21:43:42 crc kubenswrapper[4869]: I0130 21:43:42.132160 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a98ef3151784bd76a916ca4dc6d85c25c10bf0d9eca4444780ceb2c826e439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7741a57e2e55c2f88115cf30643e5c4cd87ab58081677cccdd027bf1307a7a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:42 crc kubenswrapper[4869]: I0130 21:43:42.140577 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 30 21:43:42 crc kubenswrapper[4869]: I0130 21:43:42.192990 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13da86d5-c968-4e20-88dd-86b52c6402a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5f07c14a94021761f655ab66d0782a878dacdacbe9abc2c02db132e5f5721a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0c8909f295c8aea1aa2d6da1a709d1692b8fbdd6bb389d339d644655788328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://267daf399d26dd0253eed95c94b44bd4299990a14bb28159ddfd1f324dec6192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fd36ca3590a916ec08c8ae4d3b410eb11261c6a5295a4274119f93e08079cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:42 crc kubenswrapper[4869]: I0130 21:43:42.234852 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:42 crc kubenswrapper[4869]: I0130 21:43:42.240818 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 30 21:43:42 crc kubenswrapper[4869]: I0130 21:43:42.288939 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:42 crc kubenswrapper[4869]: I0130 21:43:42.300587 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 30 21:43:42 crc kubenswrapper[4869]: I0130 21:43:42.348056 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:42 crc kubenswrapper[4869]: I0130 21:43:42.386157 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v9n4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ca98e-63b4-4337-af06-7525b62333b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c53920051c0c7658e9ffebac1382950ffea94ead520f2ca1b04cb6a0f9e15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk245\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v9n4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T21:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:42 crc kubenswrapper[4869]: I0130 21:43:42.439272 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:42 crc kubenswrapper[4869]: I0130 21:43:42.476288 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd656f9-7188-4998-8195-c2ec92442b7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jdbl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T21:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:42 crc kubenswrapper[4869]: I0130 21:43:42.510217 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13da86d5-c968-4e20-88dd-86b52c6402a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5f07c14a94021761f655ab66d0782a878dacdacbe9abc2c02db132e5f5721a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0c8909f295c8aea1aa2d6da1a709d1692b8fbdd6bb389d339d644655788328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://267daf399d26dd0253eed95c94b44bd4299990a14bb28159ddfd1f324dec6192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fd36ca3590a916ec08c8ae4d3b410eb11261c6a5295a4274119f93e08079cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:42 crc kubenswrapper[4869]: I0130 21:43:42.546946 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:42 crc kubenswrapper[4869]: I0130 21:43:42.588691 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:42 crc kubenswrapper[4869]: I0130 21:43:42.630405 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a98ef3151784bd76a916ca4dc6d85c25c10bf0d9eca4444780ceb2c826e439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7741a57e2e55c2f88115cf30643e5c4cd87ab58081677cccdd027bf1307a7a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:42 crc kubenswrapper[4869]: I0130 21:43:42.639208 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c39d4fe5-06cd-4ea4-8336-bd481332c475-ovnkube-script-lib\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:42 crc kubenswrapper[4869]: I0130 21:43:42.639277 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c39d4fe5-06cd-4ea4-8336-bd481332c475-env-overrides\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:42 crc kubenswrapper[4869]: I0130 21:43:42.639335 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c39d4fe5-06cd-4ea4-8336-bd481332c475-ovn-node-metrics-cert\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:42 crc kubenswrapper[4869]: I0130 21:43:42.639857 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c39d4fe5-06cd-4ea4-8336-bd481332c475-env-overrides\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:42 crc kubenswrapper[4869]: I0130 21:43:42.639974 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c39d4fe5-06cd-4ea4-8336-bd481332c475-ovnkube-script-lib\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:42 crc kubenswrapper[4869]: I0130 21:43:42.645265 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c39d4fe5-06cd-4ea4-8336-bd481332c475-ovn-node-metrics-cert\") pod \"ovnkube-node-stqvf\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:42 crc kubenswrapper[4869]: I0130 21:43:42.669310 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:42 crc kubenswrapper[4869]: I0130 21:43:42.706885 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v9n4p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ca98e-63b4-4337-af06-7525b62333b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c53920051c0c7658e9ffebac1382950ffea94ead520f2ca1b04cb6a0f9e15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk245\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v9n4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:42 crc kubenswrapper[4869]: I0130 21:43:42.749818 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:42 crc kubenswrapper[4869]: I0130 21:43:42.793255 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd656f9-7188-4998-8195-c2ec92442b7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jdbl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:42 crc 
kubenswrapper[4869]: I0130 21:43:42.811662 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:42 crc kubenswrapper[4869]: W0130 21:43:42.824573 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc39d4fe5_06cd_4ea4_8336_bd481332c475.slice/crio-3bde4b41d25105d1ae8bde167dfae92a242e3e870901cd1a1cc1fc2bbdc235bb WatchSource:0}: Error finding container 3bde4b41d25105d1ae8bde167dfae92a242e3e870901cd1a1cc1fc2bbdc235bb: Status 404 returned error can't find the container with id 3bde4b41d25105d1ae8bde167dfae92a242e3e870901cd1a1cc1fc2bbdc235bb Jan 30 21:43:42 crc kubenswrapper[4869]: I0130 21:43:42.827911 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 21:37:10.171386522 +0000 UTC Jan 30 21:43:42 crc kubenswrapper[4869]: I0130 21:43:42.833982 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a48f728-068e-4f6e-9794-12d375245df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb0a197870fc1ba15cbe9437452dfeac28e7670fe2383e71f5e29276082f7236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f22a7faa65595d272f6cb952ac0054da3d3b6a0017dc004f52c65d823f1da06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d705fd79e95205c59ebe341512286e7be25a663cdfe41ec50f0f57582c9f5ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 21:43:39.192174 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 21:43:39.192295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:43:39.192865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2603741252/tls.crt::/tmp/serving-cert-2603741252/tls.key\\\\\\\"\\\\nI0130 21:43:39.581448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:43:39.592260 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:43:39.592286 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:43:39.592313 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:43:39.592318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:43:39.598780 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:43:39.598801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:43:39.598815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:43:39.598819 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:43:39.598822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:43:39.598829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:43:39.601912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9563e1c219434c81376559138e931e8457d653060f639891e4a15b3a61bcf2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:42 crc kubenswrapper[4869]: I0130 21:43:42.870280 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e768b90cb0dae0e7c594e45deebf9178321fecc14339fa1e49769e13c950ff9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:42 crc kubenswrapper[4869]: I0130 21:43:42.908722 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6fc0664-5e80-440d-a6e8-4189cdf5c500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a73c5cac1fdcb8d81bccdb2eea7b34e75752439d7871493f4afdd0325f02b2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fa528823e4fa19f4de380fa7c5ab78f40c763a6ea6624ba20e1a65a81980d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vzgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:42 crc kubenswrapper[4869]: I0130 21:43:42.947741 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-tz8jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac3c503-e284-4df8-ae5e-0084a884e456\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6d47a8f0a09eac03369c1f21e44d8ec6aaf85d4d7ad180579ecd30a6311061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgb6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-tz8jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:42 crc kubenswrapper[4869]: I0130 21:43:42.994038 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c39d4fe5-06cd-4ea4-8336-bd481332c475\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-stqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:42 crc kubenswrapper[4869]: I0130 21:43:42.999693 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" event={"ID":"c39d4fe5-06cd-4ea4-8336-bd481332c475","Type":"ContainerStarted","Data":"3bde4b41d25105d1ae8bde167dfae92a242e3e870901cd1a1cc1fc2bbdc235bb"} Jan 30 21:43:43 crc kubenswrapper[4869]: I0130 21:43:43.001362 4869 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"b14ccc4339f1670442e8d9c16039fb3607ed2097daf0e58291333a9d566dc02e"} Jan 30 21:43:43 crc kubenswrapper[4869]: I0130 21:43:43.003378 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" event={"ID":"ecd656f9-7188-4998-8195-c2ec92442b7d","Type":"ContainerStarted","Data":"ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b"} Jan 30 21:43:43 crc kubenswrapper[4869]: I0130 21:43:43.030301 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e768b90cb0dae0e7c594e45deebf9178321fecc14339fa1e49769e13c950ff9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:43 crc kubenswrapper[4869]: I0130 21:43:43.067015 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6fc0664-5e80-440d-a6e8-4189cdf5c500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a73c5cac1fdcb8d81bccdb2eea7b34e75752439d7871493f4afdd0325f02b2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fa528823e4fa19f4de380fa7c5ab78f40c763a6ea6624ba20e1a65a81980d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vzgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:43 crc kubenswrapper[4869]: I0130 21:43:43.107957 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-tz8jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac3c503-e284-4df8-ae5e-0084a884e456\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6d47a8f0a09eac03369c1f21e44d8ec6aaf85d4d7ad180579ecd30a6311061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgb6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-tz8jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:43 crc kubenswrapper[4869]: I0130 21:43:43.152971 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c39d4fe5-06cd-4ea4-8336-bd481332c475\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-stqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:43 crc kubenswrapper[4869]: I0130 21:43:43.189384 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a48f728-068e-4f6e-9794-12d375245df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb0a197870fc1ba15cbe9437452dfeac28e7670fe2383e71f5e29276082f7236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f22a7faa65595d272f6cb952ac0054da3d3b6a0017dc004f52c65d823f1da06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d705fd79e95205c59ebe341512286e7be25a663cdfe41ec50f0f57582c9f5ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 21:43:39.192174 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 21:43:39.192295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:43:39.192865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2603741252/tls.crt::/tmp/serving-cert-2603741252/tls.key\\\\\\\"\\\\nI0130 21:43:39.581448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:43:39.592260 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:43:39.592286 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:43:39.592313 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:43:39.592318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:43:39.598780 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:43:39.598801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:43:39.598815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:43:39.598819 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:43:39.598822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:43:39.598829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:43:39.601912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9563e1c219434c81376559138e931e8457d653060f639891e4a15b3a61bcf2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:43 crc kubenswrapper[4869]: I0130 21:43:43.228923 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:43 crc kubenswrapper[4869]: I0130 21:43:43.267794 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b14ccc4339f1670442e8d9c16039fb3607ed2097daf0e58291333a9d566dc02e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:43 crc kubenswrapper[4869]: I0130 21:43:43.308200 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a98ef3151784bd76a916ca4dc6d85c25c10bf0d9eca4444780ceb2c826e439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7741a57e2e55c2f88115cf30643e5c4cd87ab58081677cccdd027bf1307a7a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:43 crc kubenswrapper[4869]: I0130 21:43:43.351160 
4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13da86d5-c968-4e20-88dd-86b52c6402a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5f07c14a94021761f655ab66d0782a878dacdacbe9abc2c02db132e5f5721a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0c8909f295c8aea1aa2d6da1a709d1692b8fbdd6bb389d339d644655788328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://267daf399d26dd0253eed95c94b44bd4299990a14bb28159ddfd1f324dec6192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":
\\\"cri-o://79fd36ca3590a916ec08c8ae4d3b410eb11261c6a5295a4274119f93e08079cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:43 crc kubenswrapper[4869]: I0130 21:43:43.387241 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:43 crc kubenswrapper[4869]: I0130 21:43:43.426200 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v9n4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ca98e-63b4-4337-af06-7525b62333b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c53920051c0c7658e9ffebac1382950ffea94ead520f2ca1b04cb6a0f9e15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk245\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v9n4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T21:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:43 crc kubenswrapper[4869]: I0130 21:43:43.447009 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:43:43 crc kubenswrapper[4869]: E0130 21:43:43.447180 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:43:47.447154223 +0000 UTC m=+28.332912268 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:43:43 crc kubenswrapper[4869]: I0130 21:43:43.471153 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd656f9-7188-4998-8195-c2ec92442b7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jdbl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:43 crc 
kubenswrapper[4869]: I0130 21:43:43.508828 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:43 crc kubenswrapper[4869]: I0130 21:43:43.547968 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:43:43 crc kubenswrapper[4869]: I0130 21:43:43.548005 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:43:43 crc kubenswrapper[4869]: I0130 21:43:43.548039 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" 
(UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:43:43 crc kubenswrapper[4869]: I0130 21:43:43.548064 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:43:43 crc kubenswrapper[4869]: E0130 21:43:43.548113 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:43:43 crc kubenswrapper[4869]: E0130 21:43:43.548151 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:43:43 crc kubenswrapper[4869]: E0130 21:43:43.548165 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:43:43 crc kubenswrapper[4869]: E0130 21:43:43.548183 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:43:43 crc kubenswrapper[4869]: E0130 21:43:43.548196 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:43:43 crc kubenswrapper[4869]: E0130 21:43:43.548196 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:43:47.548174301 +0000 UTC m=+28.433932336 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:43:43 crc kubenswrapper[4869]: E0130 21:43:43.548246 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:43:43 crc kubenswrapper[4869]: E0130 21:43:43.548282 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:43:43 crc kubenswrapper[4869]: E0130 21:43:43.548301 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:43:43 crc kubenswrapper[4869]: E0130 21:43:43.548251 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:43:47.548233263 +0000 UTC m=+28.433991388 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:43:43 crc kubenswrapper[4869]: E0130 21:43:43.548419 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 21:43:47.548401118 +0000 UTC m=+28.434159153 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:43:43 crc kubenswrapper[4869]: E0130 21:43:43.548465 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 21:43:47.548431339 +0000 UTC m=+28.434189474 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:43:43 crc kubenswrapper[4869]: I0130 21:43:43.558702 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c39d4fe5-06cd-4ea4-8336-bd481332c475\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-stqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:43 crc kubenswrapper[4869]: I0130 21:43:43.590519 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a48f728-068e-4f6e-9794-12d375245df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb0a197870fc1ba15cbe9437452dfeac28e7670fe2383e71f5e29276082f7236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f22a7faa65595d272f6cb952ac0054da3d3b6a0017dc004f52c65d823f1da06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d705fd79e95205c59ebe341512286e7be25a663cdfe41ec50f0f57582c9f5ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 21:43:39.192174 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 21:43:39.192295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:43:39.192865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2603741252/tls.crt::/tmp/serving-cert-2603741252/tls.key\\\\\\\"\\\\nI0130 21:43:39.581448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:43:39.592260 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:43:39.592286 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:43:39.592313 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:43:39.592318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:43:39.598780 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:43:39.598801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:43:39.598815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:43:39.598819 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:43:39.598822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:43:39.598829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:43:39.601912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9563e1c219434c81376559138e931e8457d653060f639891e4a15b3a61bcf2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:43 crc kubenswrapper[4869]: I0130 21:43:43.628885 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e768b90cb0dae0e7c594e45deebf9178321fecc14339fa1e49769e13c950ff9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:43 crc kubenswrapper[4869]: I0130 21:43:43.669299 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6fc0664-5e80-440d-a6e8-4189cdf5c500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a73c5cac1fdcb8d81bccdb2eea7b34e75752439d7871493f4afdd0325f02b2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fa528823e4fa19f4de380fa7c5ab78f40c763a6ea6624ba20e1a65a81980d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vzgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:43 crc kubenswrapper[4869]: I0130 21:43:43.708527 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-tz8jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac3c503-e284-4df8-ae5e-0084a884e456\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6d47a8f0a09eac03369c1f21e44d8ec6aaf85d4d7ad180579ecd30a6311061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgb6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-tz8jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:43 crc kubenswrapper[4869]: I0130 21:43:43.748389 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13da86d5-c968-4e20-88dd-86b52c6402a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5f07c14a94021761f655ab66d0782a878dacdacbe9abc2c02db132e5f5721a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0c8909f295c8aea1aa2d6da1a709d1692b8fbdd6bb389d339d644655788328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://267daf399d26dd0253eed95c94b44bd4299990a14bb28159ddfd1f324dec6192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fd36ca3590a916ec08c8ae4d3b410eb11261c6a5295a4274119f93e08079cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:43 crc kubenswrapper[4869]: I0130 21:43:43.787963 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:43 crc kubenswrapper[4869]: I0130 21:43:43.827116 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b14ccc4339f1670442e8d9c16039fb3607ed2097daf0e58291333a9d566dc02e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:43 crc kubenswrapper[4869]: I0130 21:43:43.828044 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 14:44:41.585052913 +0000 UTC Jan 30 21:43:43 crc kubenswrapper[4869]: I0130 21:43:43.856285 4869 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-c24fb"] Jan 30 21:43:43 crc kubenswrapper[4869]: I0130 21:43:43.856694 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-c24fb" Jan 30 21:43:43 crc kubenswrapper[4869]: I0130 21:43:43.871827 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a98ef3151784bd76a916ca4dc6d85c25c10bf0d9eca4444780ceb2c826e439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7741a57e2e55c2f88115cf30643e5c4cd87ab58081677cccdd027bf1307a7a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:43Z is after 
2025-08-24T17:21:41Z" Jan 30 21:43:43 crc kubenswrapper[4869]: I0130 21:43:43.875999 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:43:43 crc kubenswrapper[4869]: I0130 21:43:43.876043 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:43:43 crc kubenswrapper[4869]: I0130 21:43:43.875999 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:43:43 crc kubenswrapper[4869]: E0130 21:43:43.876108 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:43:43 crc kubenswrapper[4869]: E0130 21:43:43.876203 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:43:43 crc kubenswrapper[4869]: E0130 21:43:43.876280 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:43:43 crc kubenswrapper[4869]: I0130 21:43:43.881006 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 30 21:43:43 crc kubenswrapper[4869]: I0130 21:43:43.900769 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 30 21:43:43 crc kubenswrapper[4869]: I0130 21:43:43.920726 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 30 21:43:43 crc kubenswrapper[4869]: I0130 21:43:43.941655 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 30 21:43:43 crc kubenswrapper[4869]: I0130 21:43:43.951489 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4a9e9fe8-01db-40b0-bdd8-e3d626df037f-serviceca\") pod \"node-ca-c24fb\" (UID: \"4a9e9fe8-01db-40b0-bdd8-e3d626df037f\") " pod="openshift-image-registry/node-ca-c24fb" Jan 30 21:43:43 crc kubenswrapper[4869]: I0130 21:43:43.951653 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p2pd\" (UniqueName: \"kubernetes.io/projected/4a9e9fe8-01db-40b0-bdd8-e3d626df037f-kube-api-access-9p2pd\") pod \"node-ca-c24fb\" (UID: \"4a9e9fe8-01db-40b0-bdd8-e3d626df037f\") " pod="openshift-image-registry/node-ca-c24fb" Jan 30 21:43:43 crc kubenswrapper[4869]: I0130 21:43:43.951692 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4a9e9fe8-01db-40b0-bdd8-e3d626df037f-host\") pod \"node-ca-c24fb\" (UID: \"4a9e9fe8-01db-40b0-bdd8-e3d626df037f\") " pod="openshift-image-registry/node-ca-c24fb" Jan 30 21:43:43 crc kubenswrapper[4869]: I0130 21:43:43.989233 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:44 crc kubenswrapper[4869]: I0130 21:43:44.008223 4869 generic.go:334] "Generic (PLEG): container finished" podID="ecd656f9-7188-4998-8195-c2ec92442b7d" containerID="ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b" exitCode=0 Jan 30 21:43:44 crc kubenswrapper[4869]: I0130 21:43:44.008291 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" event={"ID":"ecd656f9-7188-4998-8195-c2ec92442b7d","Type":"ContainerDied","Data":"ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b"} Jan 30 21:43:44 crc kubenswrapper[4869]: I0130 21:43:44.009363 4869 generic.go:334] "Generic (PLEG): container finished" podID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerID="892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6" exitCode=0 Jan 30 21:43:44 crc kubenswrapper[4869]: I0130 21:43:44.009682 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" event={"ID":"c39d4fe5-06cd-4ea4-8336-bd481332c475","Type":"ContainerDied","Data":"892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6"} Jan 30 21:43:44 crc kubenswrapper[4869]: I0130 21:43:44.031390 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v9n4p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ca98e-63b4-4337-af06-7525b62333b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c53920051c0c7658e9ffebac1382950ffea94ead520f2ca1b04cb6a0f9e15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk245\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v9n4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:44 crc kubenswrapper[4869]: I0130 21:43:44.052703 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9p2pd\" (UniqueName: \"kubernetes.io/projected/4a9e9fe8-01db-40b0-bdd8-e3d626df037f-kube-api-access-9p2pd\") pod \"node-ca-c24fb\" (UID: \"4a9e9fe8-01db-40b0-bdd8-e3d626df037f\") " pod="openshift-image-registry/node-ca-c24fb" Jan 30 21:43:44 crc kubenswrapper[4869]: I0130 21:43:44.052759 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4a9e9fe8-01db-40b0-bdd8-e3d626df037f-host\") pod \"node-ca-c24fb\" (UID: \"4a9e9fe8-01db-40b0-bdd8-e3d626df037f\") " pod="openshift-image-registry/node-ca-c24fb" Jan 30 21:43:44 crc kubenswrapper[4869]: I0130 21:43:44.053030 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4a9e9fe8-01db-40b0-bdd8-e3d626df037f-host\") pod \"node-ca-c24fb\" (UID: \"4a9e9fe8-01db-40b0-bdd8-e3d626df037f\") " pod="openshift-image-registry/node-ca-c24fb" Jan 
30 21:43:44 crc kubenswrapper[4869]: I0130 21:43:44.053214 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4a9e9fe8-01db-40b0-bdd8-e3d626df037f-serviceca\") pod \"node-ca-c24fb\" (UID: \"4a9e9fe8-01db-40b0-bdd8-e3d626df037f\") " pod="openshift-image-registry/node-ca-c24fb" Jan 30 21:43:44 crc kubenswrapper[4869]: I0130 21:43:44.054862 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4a9e9fe8-01db-40b0-bdd8-e3d626df037f-serviceca\") pod \"node-ca-c24fb\" (UID: \"4a9e9fe8-01db-40b0-bdd8-e3d626df037f\") " pod="openshift-image-registry/node-ca-c24fb" Jan 30 21:43:44 crc kubenswrapper[4869]: I0130 21:43:44.068698 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:44 crc kubenswrapper[4869]: I0130 21:43:44.097833 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p2pd\" (UniqueName: \"kubernetes.io/projected/4a9e9fe8-01db-40b0-bdd8-e3d626df037f-kube-api-access-9p2pd\") pod \"node-ca-c24fb\" (UID: \"4a9e9fe8-01db-40b0-bdd8-e3d626df037f\") " pod="openshift-image-registry/node-ca-c24fb" Jan 30 21:43:44 crc kubenswrapper[4869]: I0130 21:43:44.130152 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd656f9-7188-4998-8195-c2ec92442b7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jdbl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:44 crc kubenswrapper[4869]: I0130 21:43:44.167610 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13da86d5-c968-4e20-88dd-86b52c6402a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5f07c14a94021761f655ab66d0782a878dacdacbe9abc2c02db132e5f5721a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0c8909f295c8aea1aa2d6da1a709d1692b8fbdd6bb389d339d644655788328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://267daf399d26dd0253eed95c94b44bd4299990a14bb28159ddfd1f324dec6192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedA
t\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fd36ca3590a916ec08c8ae4d3b410eb11261c6a5295a4274119f93e08079cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:44 crc kubenswrapper[4869]: I0130 21:43:44.169782 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-c24fb" Jan 30 21:43:44 crc kubenswrapper[4869]: W0130 21:43:44.182597 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a9e9fe8_01db_40b0_bdd8_e3d626df037f.slice/crio-0c95d748fa7f9c6c05c9342ce0a1d3d9b0280d3b36fbd62751bb478c949c43a6 WatchSource:0}: Error finding container 0c95d748fa7f9c6c05c9342ce0a1d3d9b0280d3b36fbd62751bb478c949c43a6: Status 404 returned error can't find the container with id 0c95d748fa7f9c6c05c9342ce0a1d3d9b0280d3b36fbd62751bb478c949c43a6 Jan 30 21:43:44 crc kubenswrapper[4869]: I0130 21:43:44.210481 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:44 crc kubenswrapper[4869]: I0130 21:43:44.250681 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b14ccc4339f1670442e8d9c16039fb3607ed2097daf0e58291333a9d566dc02e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:44 crc kubenswrapper[4869]: I0130 21:43:44.288542 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a98ef3151784bd76a916ca4dc6d85c25c10bf0d9eca4444780ceb2c826e439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7741a57e2e55c2f88115cf30643e5c4cd87ab58081677cccdd027bf1307a7a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:44 crc kubenswrapper[4869]: I0130 21:43:44.328872 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:44 crc kubenswrapper[4869]: I0130 21:43:44.367534 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v9n4p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ca98e-63b4-4337-af06-7525b62333b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c53920051c0c7658e9ffebac1382950ffea94ead520f2ca1b04cb6a0f9e15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk245\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v9n4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:44 crc kubenswrapper[4869]: I0130 21:43:44.409750 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:44 crc kubenswrapper[4869]: I0130 21:43:44.450303 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd656f9-7188-4998-8195-c2ec92442b7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jdbl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:44 crc kubenswrapper[4869]: I0130 21:43:44.487483 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c24fb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a9e9fe8-01db-40b0-bdd8-e3d626df037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p2pd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c24fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:44 crc kubenswrapper[4869]: I0130 21:43:44.532527 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tz8jn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac3c503-e284-4df8-ae5e-0084a884e456\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6d47a8f0a09eac03369c1f21e44d8ec6aaf85d4d7ad180579ecd30a6311061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgb6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tz8jn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:44 crc kubenswrapper[4869]: I0130 21:43:44.575645 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c39d4fe5-06cd-4ea4-8336-bd481332c475\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-stqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:44 crc kubenswrapper[4869]: I0130 21:43:44.608481 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a48f728-068e-4f6e-9794-12d375245df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb0a197870fc1ba15cbe9437452dfeac28e7670fe2383e71f5e29276082f7236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f22a7faa65595d272f6cb952ac0054da3d3b6a0017dc004f52c65d823f1da06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d705fd79e95205c59ebe341512286e7be25a663cdfe41ec50f0f57582c9f5ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 21:43:39.192174 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 21:43:39.192295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:43:39.192865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2603741252/tls.crt::/tmp/serving-cert-2603741252/tls.key\\\\\\\"\\\\nI0130 21:43:39.581448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:43:39.592260 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:43:39.592286 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:43:39.592313 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:43:39.592318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:43:39.598780 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:43:39.598801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:43:39.598815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:43:39.598819 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:43:39.598822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:43:39.598829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:43:39.601912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9563e1c219434c81376559138e931e8457d653060f639891e4a15b3a61bcf2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:44 crc kubenswrapper[4869]: I0130 21:43:44.649270 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e768b90cb0dae0e7c594e45deebf9178321fecc14339fa1e49769e13c950ff9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:44 crc kubenswrapper[4869]: I0130 21:43:44.689608 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6fc0664-5e80-440d-a6e8-4189cdf5c500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a73c5cac1fdcb8d81bccdb2eea7b34e75752439d7871493f4afdd0325f02b2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fa528823e4fa19f4de380fa7c5ab78f40c763a6ea6624ba20e1a65a81980d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vzgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:44Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:44 crc kubenswrapper[4869]: I0130 21:43:44.828657 4869 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 09:53:24.945864465 +0000 UTC Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.020886 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" event={"ID":"c39d4fe5-06cd-4ea4-8336-bd481332c475","Type":"ContainerStarted","Data":"54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86"} Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.020941 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" event={"ID":"c39d4fe5-06cd-4ea4-8336-bd481332c475","Type":"ContainerStarted","Data":"b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09"} Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.020951 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" event={"ID":"c39d4fe5-06cd-4ea4-8336-bd481332c475","Type":"ContainerStarted","Data":"3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36"} Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.020960 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" event={"ID":"c39d4fe5-06cd-4ea4-8336-bd481332c475","Type":"ContainerStarted","Data":"2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5"} Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.020973 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" event={"ID":"c39d4fe5-06cd-4ea4-8336-bd481332c475","Type":"ContainerStarted","Data":"4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8"} Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.020982 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" event={"ID":"c39d4fe5-06cd-4ea4-8336-bd481332c475","Type":"ContainerStarted","Data":"6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3"} Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.023843 4869 generic.go:334] "Generic (PLEG): container finished" podID="ecd656f9-7188-4998-8195-c2ec92442b7d" containerID="c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e" exitCode=0 Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.023912 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" event={"ID":"ecd656f9-7188-4998-8195-c2ec92442b7d","Type":"ContainerDied","Data":"c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e"} Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.027021 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-c24fb" event={"ID":"4a9e9fe8-01db-40b0-bdd8-e3d626df037f","Type":"ContainerStarted","Data":"c97e257984df620e4f118db44cb63e7748a6d62887c3904f1a2a0641991602d1"} Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.027156 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-c24fb" event={"ID":"4a9e9fe8-01db-40b0-bdd8-e3d626df037f","Type":"ContainerStarted","Data":"0c95d748fa7f9c6c05c9342ce0a1d3d9b0280d3b36fbd62751bb478c949c43a6"} Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.044921 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13da86d5-c968-4e20-88dd-86b52c6402a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5f07c14a94021761f655ab66d0782a878dacdacbe9abc2c02db132e5f5721a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0c8909f295c8aea1aa2d6da1a709d1692b8fbdd6bb389d339d644655788328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://267daf399d26dd0253eed95c94b44bd4299990a14bb28159ddfd1f324dec6192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fd36ca3590a916ec08c8ae4d3b410eb11261c6a5295a4274119f93e08079cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.061070 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.074676 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b14ccc4339f1670442e8d9c16039fb3607ed2097daf0e58291333a9d566dc02e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.089580 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a98ef3151784bd76a916ca4dc6d85c25c10bf0d9eca4444780ceb2c826e439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7741a57e2e55c2f88115cf30643e5c4cd87ab58081677cccdd027bf1307a7a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.099905 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.112367 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v9n4p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ca98e-63b4-4337-af06-7525b62333b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c53920051c0c7658e9ffebac1382950ffea94ead520f2ca1b04cb6a0f9e15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk245\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v9n4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.124880 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.143998 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd656f9-7188-4998-8195-c2ec92442b7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jdbl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.164596 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c24fb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a9e9fe8-01db-40b0-bdd8-e3d626df037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p2pd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c24fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:45 
crc kubenswrapper[4869]: I0130 21:43:45.177693 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6fc0664-5e80-440d-a6e8-4189cdf5c500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a73c5cac1fdcb8d81bccdb2eea7b34e75752439d7871493f4afdd0325f02b2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fa528823e4fa19f4de380fa7c5ab78f40c763a6ea6624ba20e1a65a81980d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vzgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-30T21:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.194654 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tz8jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac3c503-e284-4df8-ae5e-0084a884e456\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6d47a8f0a09eac03369c1f21e44d8ec6aaf85d4d7ad180579ecd30a6311061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgb6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tz8jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.218636 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c39d4fe5-06cd-4ea4-8336-bd481332c475\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-stqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:45Z 
is after 2025-08-24T17:21:41Z" Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.236196 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a48f728-068e-4f6e-9794-12d375245df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb0a197870fc1ba15cbe9437452dfeac28e7670fe2383e71f5e29276082f7236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f22a7faa65595d272f6cb952ac0054da3d3b6a0017dc004f52c65d823f1da06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d705fd79e95205c59ebe341512286e7be25a663cdfe41ec50f0f57582c9f5ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 21:43:39.192174 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 21:43:39.192295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:43:39.192865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2603741252/tls.crt::/tmp/serving-cert-2603741252/tls.key\\\\\\\"\\\\nI0130 21:43:39.581448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:43:39.592260 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:43:39.592286 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:43:39.592313 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:43:39.592318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:43:39.598780 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:43:39.598801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:43:39.598815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:43:39.598819 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:43:39.598822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:43:39.598829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:43:39.601912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9563e1c219434c81376559138e931e8457d653060f639891e4a15b3a61bcf2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.250184 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e768b90cb0dae0e7c594e45deebf9178321fecc14339fa1e49769e13c950ff9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.288160 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.336611 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd656f9-7188-4998-8195-c2ec92442b7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jdbl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.370017 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c24fb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a9e9fe8-01db-40b0-bdd8-e3d626df037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c97e257984df620e4f118db44cb63e7748a6d62887c3904f1a2a0641991602d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p2pd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c24fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:45Z is after 
2025-08-24T17:21:41Z" Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.410445 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e768b90cb0dae0e7c594e45deebf9178321fecc14339fa1e49769e13c950ff9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.453520 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6fc0664-5e80-440d-a6e8-4189cdf5c500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a73c5cac1fdcb8d81bccdb2eea7b34e75752439d7871493f4afdd0325f02b2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fa528823e4fa19f4de380fa7c5ab78f40c763a6ea6624ba20e1a65a81980d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vzgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.494076 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-tz8jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac3c503-e284-4df8-ae5e-0084a884e456\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6d47a8f0a09eac03369c1f21e44d8ec6aaf85d4d7ad180579ecd30a6311061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgb6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-tz8jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.538811 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c39d4fe5-06cd-4ea4-8336-bd481332c475\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-stqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:45Z 
is after 2025-08-24T17:21:41Z" Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.573059 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a48f728-068e-4f6e-9794-12d375245df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb0a197870fc1ba15cbe9437452dfeac28e7670fe2383e71f5e29276082f7236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f22a7faa65595d272f6cb952ac0054da3d3b6a0017dc004f52c65d823f1da06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d705fd79e95205c59ebe341512286e7be25a663cdfe41ec50f0f57582c9f5ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 21:43:39.192174 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 21:43:39.192295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:43:39.192865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2603741252/tls.crt::/tmp/serving-cert-2603741252/tls.key\\\\\\\"\\\\nI0130 21:43:39.581448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:43:39.592260 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:43:39.592286 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:43:39.592313 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:43:39.592318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:43:39.598780 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:43:39.598801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:43:39.598815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:43:39.598819 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:43:39.598822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:43:39.598829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:43:39.601912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9563e1c219434c81376559138e931e8457d653060f639891e4a15b3a61bcf2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.589242 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.591299 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.591359 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.591378 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.591540 4869 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.610563 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a98ef3151784bd76a916ca4dc6d85c25c10bf0d9eca4444780ceb2c826e439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7741a57e2e55c2f88115cf30643e5c4cd87ab58081677cccdd027bf1307a7a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.661134 4869 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.661403 4869 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.662426 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.662464 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.662472 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.662487 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.662498 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:45Z","lastTransitionTime":"2026-01-30T21:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:45 crc kubenswrapper[4869]: E0130 21:43:45.681117 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eed4c80a-2486-4f50-8ae9-4ddbd620d70e\\\",\\\"systemUUID\\\":\\\"073254b5-c7c0-49f1-bed8-4438b0f03db1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.685330 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.685400 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.685414 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.685433 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.685446 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:45Z","lastTransitionTime":"2026-01-30T21:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.690775 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13da86d5-c968-4e20-88dd-86b52c6402a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5f07c14a94021761f655ab66d0782a878dacdacbe9abc2c02db132e5f5721a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0c8909f295c8aea1aa2d6da1a709d1692b8fbdd6bb389d339d644655788328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-c
erts\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://267daf399d26dd0253eed95c94b44bd4299990a14bb28159ddfd1f324dec6192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fd36ca3590a916ec08c8ae4d3b410eb11261c6a5295a4274119f93e08079cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:45 crc kubenswrapper[4869]: E0130 21:43:45.698672 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eed4c80a-2486-4f50-8ae9-4ddbd620d70e\\\",\\\"systemUUID\\\":\\\"0
73254b5-c7c0-49f1-bed8-4438b0f03db1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:45Z is after 2025-08-24T17:21:41Z"
Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.703292 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.703336 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.703345 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.703360 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.703372 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:45Z","lastTransitionTime":"2026-01-30T21:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.718882 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.718939 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.718952 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.718969 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.718980 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:45Z","lastTransitionTime":"2026-01-30T21:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.728358 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:45Z is after 2025-08-24T17:21:41Z"
Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.733492 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.733530 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.733543 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.733561 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.733574 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:45Z","lastTransitionTime":"2026-01-30T21:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:43:45 crc kubenswrapper[4869]: E0130 21:43:45.744383 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.745804 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.745837 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.745846 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.745861 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.745873 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:45Z","lastTransitionTime":"2026-01-30T21:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.767547 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b14ccc4339f1670442e8d9c16039fb3607ed2097daf0e58291333a9d566dc02e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:45Z is after 2025-08-24T17:21:41Z"
Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.808499 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.829357 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 06:21:55.202962316 +0000 UTC Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.846727 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v9n4p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ca98e-63b4-4337-af06-7525b62333b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c53920051c0c7658e9ffebac1382950ffea94ead520f2ca1b04cb6a0f9e15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk245\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v9n4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.848057 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.848100 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.848109 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.848124 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.848134 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:45Z","lastTransitionTime":"2026-01-30T21:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.876696 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.876761 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:43:45 crc kubenswrapper[4869]: E0130 21:43:45.876819 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.876864 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:43:45 crc kubenswrapper[4869]: E0130 21:43:45.876985 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:43:45 crc kubenswrapper[4869]: E0130 21:43:45.877031 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.950502 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.950541 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.950555 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.950572 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:45 crc kubenswrapper[4869]: I0130 21:43:45.950589 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:45Z","lastTransitionTime":"2026-01-30T21:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.032581 4869 generic.go:334] "Generic (PLEG): container finished" podID="ecd656f9-7188-4998-8195-c2ec92442b7d" containerID="3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f" exitCode=0 Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.032753 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" event={"ID":"ecd656f9-7188-4998-8195-c2ec92442b7d","Type":"ContainerDied","Data":"3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f"} Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.053263 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.053302 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.053314 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.053330 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.053342 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:46Z","lastTransitionTime":"2026-01-30T21:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.054738 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:46Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.080797 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd656f9-7188-4998-8195-c2ec92442b7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jdbl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:46Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.094983 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c24fb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a9e9fe8-01db-40b0-bdd8-e3d626df037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c97e257984df620e4f118db44cb63e7748a6d62887c3904f1a2a0641991602d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p2pd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c24fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:46Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.113771 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a48f728-068e-4f6e-9794-12d375245df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb0a197870fc1ba15cbe9437452dfeac28e7670fe2383e71f5e29276082f7236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f22a7faa65595d272f6cb952ac0054da3d3b6a0017dc004f52c65d823f1da06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d705fd79e95205c59ebe341512286e7be25a663cdfe41ec50f0f57582c9f5ce\\\",\\\"
image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 21:43:39.192174 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 21:43:39.192295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:43:39.192865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2603741252/tls.crt::/tmp/serving-cert-2603741252/tls.key\\\\\\\"\\\\nI0130 21:43:39.581448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:43:39.592260 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:43:39.592286 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:43:39.592313 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:43:39.592318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:43:39.598780 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:43:39.598801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:43:39.598815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:43:39.598819 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:43:39.598822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:43:39.598829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:43:39.601912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9563e1c219434c81376559138e931e8457d653060f639891e4a15b3a61bcf2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:46Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.126773 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e768b90cb0dae0e7c594e45deebf9178321fecc14339fa1e49769e13c950ff9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:46Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.138300 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6fc0664-5e80-440d-a6e8-4189cdf5c500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a73c5cac1fdcb8d81bccdb2eea7b34e75752439d7871493f4afdd0325f02b2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fa528823e4fa19f4de380fa7c5ab78f40c763a6ea6624ba20e1a65a81980d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vzgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:46Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.149043 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-tz8jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac3c503-e284-4df8-ae5e-0084a884e456\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6d47a8f0a09eac03369c1f21e44d8ec6aaf85d4d7ad180579ecd30a6311061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgb6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-tz8jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:46Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.155167 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.155195 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.155202 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.155216 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.155224 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:46Z","lastTransitionTime":"2026-01-30T21:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.172992 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c39d4fe5-06cd-4ea4-8336-bd481332c475\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-stqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:46Z 
is after 2025-08-24T17:21:41Z" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.207882 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13da86d5-c968-4e20-88dd-86b52c6402a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5f07c14a94021761f655ab66d0782a878dacdacbe9abc2c02db132e5f5721a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0c8909f295c8aea1aa2d6da1a709d1692b8fbdd6bb389d339d644655788328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://267daf399d26dd0253eed95c94b44bd4299990a14bb28159ddfd1f324dec6192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fd36ca3590a916ec08c8ae4d3b410eb11261c6a5295a4274119f93e08079cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:46Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.249187 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:46Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.256659 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.256681 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.256690 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.256703 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.256712 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:46Z","lastTransitionTime":"2026-01-30T21:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.289954 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b14ccc4339f1670442e8d9c16039fb3607ed2097daf0e58291333a9d566dc02e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:46Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.341855 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a98ef3151784bd76a916ca4dc6d85c25c10bf0d9eca4444780ceb2c826e439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7741a57e2e55c2f88115cf30643e5c4cd87ab58081677cccdd027bf1307a7a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:46Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.342951 4869 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.361143 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.361176 4869 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.361187 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.361205 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.361216 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:46Z","lastTransitionTime":"2026-01-30T21:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.392770 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:46Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.428617 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v9n4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ca98e-63b4-4337-af06-7525b62333b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c53920051c0c7658e9ffebac1382950ffea94ead520f2ca1b04cb6a0f9e15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk245\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v9n4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T21:43:46Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.463357 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.463397 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.463406 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.463422 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.463432 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:46Z","lastTransitionTime":"2026-01-30T21:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.565331 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.565367 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.565377 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.565393 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.565406 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:46Z","lastTransitionTime":"2026-01-30T21:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.667471 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.667498 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.667506 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.667519 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.667528 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:46Z","lastTransitionTime":"2026-01-30T21:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.769615 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.769643 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.769651 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.769665 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.769673 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:46Z","lastTransitionTime":"2026-01-30T21:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.871928 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.871972 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.871993 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.872015 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.872031 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:46Z","lastTransitionTime":"2026-01-30T21:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.913288 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 05:46:40.848083008 +0000 UTC Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.974212 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.974250 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.974262 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.974278 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:46 crc kubenswrapper[4869]: I0130 21:43:46.974290 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:46Z","lastTransitionTime":"2026-01-30T21:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.039064 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" event={"ID":"ecd656f9-7188-4998-8195-c2ec92442b7d","Type":"ContainerStarted","Data":"17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1"} Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.061193 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:47Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.077128 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.077187 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.077199 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.077216 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.077198 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b14ccc4339f1670442e8d9c16039fb3607ed2097daf0e58291333a9d566dc02e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:47Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.077228 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:47Z","lastTransitionTime":"2026-01-30T21:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.093045 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a98ef3151784bd76a916ca4dc6d85c25c10bf0d9eca4444780ceb2c826e439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7741a57e2e55c2f88115cf30643e5c4cd87ab58081677cccdd027bf1307a7a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"nam
e\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:47Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.109011 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13da86d5-c968-4e20-88dd-86b52c6402a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5f07c14a94021761f655ab66d0782a878dacdacbe9abc2c02db132e5f5721a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0c8909f295c8aea1aa2d6da1a709d1692b8fbdd6bb389d339d644655788328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://267daf399d26dd0253eed95c94b44bd4299990a14bb28159ddfd1f324dec6192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578b
c18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fd36ca3590a916ec08c8ae4d3b410eb11261c6a5295a4274119f93e08079cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:47Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.130183 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:47Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.146462 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v9n4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ca98e-63b4-4337-af06-7525b62333b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c53920051c0c7658e9ffebac1382950ffea94ead520f2ca1b04cb6a0f9e15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk245\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v9n4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:47Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.164394 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd656f9-7188-4998-8195-c2ec92442b7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jdbl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:47Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.176777 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c24fb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a9e9fe8-01db-40b0-bdd8-e3d626df037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c97e257984df620e4f118db44cb63e7748a6d62887c3904f1a2a0641991602d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p2pd\\\",\\\"readOnly\\\":true,\\\"recursi
veReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c24fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:47Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.179463 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.179502 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.179510 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.179526 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.179535 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:47Z","lastTransitionTime":"2026-01-30T21:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.199656 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:47Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.219205 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e768b90cb0dae0e7c594e45deebf9178321fecc14339fa1e49769e13c950ff9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:47Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.236434 4869 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6fc0664-5e80-440d-a6e8-4189cdf5c500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a73c5cac1fdcb8d81bccdb2eea7b34e75752439d7871493f4afdd0325f02b2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fa528823e4fa19f4de380fa7c5ab78f40c763a6ea6624ba20e1a65a81980d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vzgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:47Z is after 2025-08-24T17:21:41Z" Jan 30 
21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.250858 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tz8jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac3c503-e284-4df8-ae5e-0084a884e456\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6d47a8f0a09eac03369c1f21e44d8ec6aaf85d4d7ad180579ecd30a6311061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgb6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tz8jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:47Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.293241 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.293297 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.293312 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.293335 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.293352 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:47Z","lastTransitionTime":"2026-01-30T21:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.294666 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c39d4fe5-06cd-4ea4-8336-bd481332c475\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-stqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:47Z 
is after 2025-08-24T17:21:41Z" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.306959 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a48f728-068e-4f6e-9794-12d375245df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb0a197870fc1ba15cbe9437452dfeac28e7670fe2383e71f5e29276082f7236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f22a7faa65595d272f6cb952ac0054da3d3b6a0017dc004f52c65d823f1da06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d705fd79e95205c59ebe341512286e7be25a663cdfe41ec50f0f57582c9f5ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 21:43:39.192174 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 21:43:39.192295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:43:39.192865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2603741252/tls.crt::/tmp/serving-cert-2603741252/tls.key\\\\\\\"\\\\nI0130 21:43:39.581448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:43:39.592260 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:43:39.592286 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:43:39.592313 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:43:39.592318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:43:39.598780 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:43:39.598801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:43:39.598815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:43:39.598819 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:43:39.598822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:43:39.598829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:43:39.601912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9563e1c219434c81376559138e931e8457d653060f639891e4a15b3a61bcf2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:47Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.395767 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.395800 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.395809 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.395823 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.395833 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:47Z","lastTransitionTime":"2026-01-30T21:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.498438 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.498493 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.498507 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.498527 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.498540 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:47Z","lastTransitionTime":"2026-01-30T21:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.522150 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:43:47 crc kubenswrapper[4869]: E0130 21:43:47.522305 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:43:55.522281558 +0000 UTC m=+36.408039583 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.603111 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.603545 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.603582 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.603601 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.603620 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:47Z","lastTransitionTime":"2026-01-30T21:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.609965 4869 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.623946 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.624011 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.624060 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.624105 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 
21:43:47 crc kubenswrapper[4869]: E0130 21:43:47.624283 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:43:47 crc kubenswrapper[4869]: E0130 21:43:47.624317 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:43:47 crc kubenswrapper[4869]: E0130 21:43:47.624338 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:43:47 crc kubenswrapper[4869]: E0130 21:43:47.624409 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 21:43:55.624382048 +0000 UTC m=+36.510140113 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:43:47 crc kubenswrapper[4869]: E0130 21:43:47.624498 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:43:47 crc kubenswrapper[4869]: E0130 21:43:47.624572 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:43:47 crc kubenswrapper[4869]: E0130 21:43:47.624608 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:43:55.624583034 +0000 UTC m=+36.510341079 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:43:47 crc kubenswrapper[4869]: E0130 21:43:47.624629 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:43:47 crc kubenswrapper[4869]: E0130 21:43:47.624652 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:43:47 crc kubenswrapper[4869]: E0130 21:43:47.624746 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 21:43:55.624715008 +0000 UTC m=+36.510473073 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:43:47 crc kubenswrapper[4869]: E0130 21:43:47.624878 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:43:47 crc kubenswrapper[4869]: E0130 21:43:47.625045 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:43:55.625031538 +0000 UTC m=+36.510789573 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.706654 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.706703 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.706716 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.706740 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.706756 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:47Z","lastTransitionTime":"2026-01-30T21:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.809888 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.810174 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.810285 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.810406 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.810533 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:47Z","lastTransitionTime":"2026-01-30T21:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.876250 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.876321 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.876285 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:43:47 crc kubenswrapper[4869]: E0130 21:43:47.876544 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:43:47 crc kubenswrapper[4869]: E0130 21:43:47.876618 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:43:47 crc kubenswrapper[4869]: E0130 21:43:47.876696 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.913461 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 15:09:37.385866178 +0000 UTC Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.914233 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.914279 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.914294 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.914312 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.914326 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:47Z","lastTransitionTime":"2026-01-30T21:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.918673 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:43:47 crc kubenswrapper[4869]: I0130 21:43:47.919465 4869 scope.go:117] "RemoveContainer" containerID="75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3" Jan 30 21:43:47 crc kubenswrapper[4869]: E0130 21:43:47.919625 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.016830 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.017092 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.017213 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.017320 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.017409 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:48Z","lastTransitionTime":"2026-01-30T21:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.049651 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" event={"ID":"c39d4fe5-06cd-4ea4-8336-bd481332c475","Type":"ContainerStarted","Data":"e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e"} Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.119743 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.119814 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.119841 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.119874 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.119983 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:48Z","lastTransitionTime":"2026-01-30T21:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.223718 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.223778 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.223797 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.223826 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.223851 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:48Z","lastTransitionTime":"2026-01-30T21:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.327407 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.327719 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.327854 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.328014 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.328215 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:48Z","lastTransitionTime":"2026-01-30T21:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.432188 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.432260 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.432287 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.432320 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.432345 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:48Z","lastTransitionTime":"2026-01-30T21:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.534621 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.534879 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.535000 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.535093 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.535175 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:48Z","lastTransitionTime":"2026-01-30T21:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.638105 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.638141 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.638150 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.638164 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.638174 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:48Z","lastTransitionTime":"2026-01-30T21:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.651809 4869 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.740256 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.740299 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.740317 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.740334 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.740366 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:48Z","lastTransitionTime":"2026-01-30T21:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.778257 4869 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.842789 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.842857 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.842878 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.842970 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.842998 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:48Z","lastTransitionTime":"2026-01-30T21:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.913940 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 06:09:12.767624693 +0000 UTC Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.946090 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.946142 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.946151 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.946166 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:48 crc kubenswrapper[4869]: I0130 21:43:48.946175 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:48Z","lastTransitionTime":"2026-01-30T21:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.049032 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.049063 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.049071 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.049083 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.049092 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:49Z","lastTransitionTime":"2026-01-30T21:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.151492 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.151539 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.151557 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.151584 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.151602 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:49Z","lastTransitionTime":"2026-01-30T21:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.254644 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.254712 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.254737 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.254768 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.254792 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:49Z","lastTransitionTime":"2026-01-30T21:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.357229 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.357274 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.357293 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.357311 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.357325 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:49Z","lastTransitionTime":"2026-01-30T21:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.459474 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.459515 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.459527 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.459546 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.459559 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:49Z","lastTransitionTime":"2026-01-30T21:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.561599 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.561634 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.561647 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.561660 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.561669 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:49Z","lastTransitionTime":"2026-01-30T21:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.663845 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.663936 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.663954 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.663977 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.663993 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:49Z","lastTransitionTime":"2026-01-30T21:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.807663 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.807924 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.808073 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.808171 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.808252 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:49Z","lastTransitionTime":"2026-01-30T21:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.876726 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.876792 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.876806 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:43:49 crc kubenswrapper[4869]: E0130 21:43:49.876946 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:43:49 crc kubenswrapper[4869]: E0130 21:43:49.877041 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:43:49 crc kubenswrapper[4869]: E0130 21:43:49.877229 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.891020 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.908416 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd656f9-7188-4998-8195-c2ec92442b7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9
8100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jdbl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.910547 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.910707 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.910839 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.911005 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.911124 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:49Z","lastTransitionTime":"2026-01-30T21:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.914316 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 11:41:40.784581141 +0000 UTC Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.920513 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c24fb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a9e9fe8-01db-40b0-bdd8-e3d626df037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c97e257984df620e4f118db44cb63e7748a6d62887c3904f1a2a0641991602d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p2pd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c24fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.934176 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tz8jn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac3c503-e284-4df8-ae5e-0084a884e456\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6d47a8f0a09eac03369c1f21e44d8ec6aaf85d4d7ad180579ecd30a6311061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgb6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tz8jn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.957402 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c39d4fe5-06cd-4ea4-8336-bd481332c475\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-stqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.970466 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a48f728-068e-4f6e-9794-12d375245df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb0a197870fc1ba15cbe9437452dfeac28e7670fe2383e71f5e29276082f7236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f22a7faa65595d272f6cb952ac0054da3d3b6a0017dc004f52c65d823f1da06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d705fd79e95205c59ebe341512286e7be25a663cdfe41ec50f0f57582c9f5ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 21:43:39.192174 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 21:43:39.192295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:43:39.192865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2603741252/tls.crt::/tmp/serving-cert-2603741252/tls.key\\\\\\\"\\\\nI0130 21:43:39.581448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:43:39.592260 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:43:39.592286 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:43:39.592313 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:43:39.592318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:43:39.598780 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:43:39.598801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:43:39.598815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:43:39.598819 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:43:39.598822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:43:39.598829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:43:39.601912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9563e1c219434c81376559138e931e8457d653060f639891e4a15b3a61bcf2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.984272 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e768b90cb0dae0e7c594e45deebf9178321fecc14339fa1e49769e13c950ff9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:49 crc kubenswrapper[4869]: I0130 21:43:49.997890 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6fc0664-5e80-440d-a6e8-4189cdf5c500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a73c5cac1fdcb8d81bccdb2eea7b34e75752439d7871493f4afdd0325f02b2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fa528823e4fa19f4de380fa7c5ab78f40c763a6ea6624ba20e1a65a81980d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vzgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.011529 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13da86d5-c968-4e20-88dd-86b52c6402a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5f07c14a94021761f655ab66d0782a878dacdacbe9abc2c02db132e5f5721a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0c8909f295c8aea1aa2d6da1a709d1692b8fbdd6bb389d339d644655788328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://267daf399d26dd0253eed95c94b44bd4299990a14bb28159ddfd1f324dec6192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fd36ca3590a916ec08c8ae4d3
b410eb11261c6a5295a4274119f93e08079cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.012815 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.012984 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.013075 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.013191 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.013310 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:50Z","lastTransitionTime":"2026-01-30T21:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.032542 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.042607 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b14ccc4339f1670442e8d9c16039fb3607ed2097daf0e58291333a9d566dc02e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.056560 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a98ef3151784bd76a916ca4dc6d85c25c10bf0d9eca4444780ceb2c826e439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7741a57e2e55c2f88115cf30643e5c4cd87ab58081677cccdd027bf1307a7a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.060529 4869 generic.go:334] "Generic (PLEG): container finished" podID="ecd656f9-7188-4998-8195-c2ec92442b7d" containerID="17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1" exitCode=0 Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.060569 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" 
event={"ID":"ecd656f9-7188-4998-8195-c2ec92442b7d","Type":"ContainerDied","Data":"17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1"} Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.070282 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.083530 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v9n4p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ca98e-63b4-4337-af06-7525b62333b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c53920051c0c7658e9ffebac1382950ffea94ead520f2ca1b04cb6a0f9e15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk245\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v9n4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.097641 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.114629 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v9n4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ca98e-63b4-4337-af06-7525b62333b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c53920051c0c7658e9ffebac1382950ffea94ead520f2ca1b04cb6a0f9e15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk245\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v9n4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.118598 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.118627 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.118636 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.118654 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.118666 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:50Z","lastTransitionTime":"2026-01-30T21:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.132606 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.148728 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd656f9-7188-4998-8195-c2ec92442b7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jdbl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.158438 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c24fb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a9e9fe8-01db-40b0-bdd8-e3d626df037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c97e257984df620e4f118db44cb63e7748a6d62887c3904f1a2a0641991602d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p2pd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c24fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.177265 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c39d4fe5-06cd-4ea4-8336-bd481332c475\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-li
b\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\
\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-stqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.191797 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a48f728-068e-4f6e-9794-12d375245df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb0a197870fc1ba15cbe9437452dfeac28e7670fe2383e71f5e29276082f7236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f22a7faa65595d272f6cb952ac0054da3d3b6a0017dc004f52c65d823f1da06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d705fd79e95205c59ebe341512286e7be25a663cdfe41ec50f0f57582c9f5ce\
\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 21:43:39.192174 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 21:43:39.192295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:43:39.192865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2603741252/tls.crt::/tmp/serving-cert-2603741252/tls.key\\\\\\\"\\\\nI0130 21:43:39.581448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:43:39.592260 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:43:39.592286 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:43:39.592313 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:43:39.592318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:43:39.598780 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:43:39.598801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:43:39.598815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:43:39.598819 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:43:39.598822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:43:39.598829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:43:39.601912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9563e1c219434c81376559138e931e8457d653060f639891e4a15b3a61bcf2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.207149 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e768b90cb0dae0e7c594e45deebf9178321fecc14339fa1e49769e13c950ff9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.217663 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6fc0664-5e80-440d-a6e8-4189cdf5c500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a73c5cac1fdcb8d81bccdb2eea7b34e75752439d7871493f4afdd0325f02b2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fa528823e4fa19f4de380fa7c5ab78f40c763a6ea6624ba20e1a65a81980d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vzgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.221990 4869 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.222025 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.222035 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.222051 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.222062 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:50Z","lastTransitionTime":"2026-01-30T21:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.234011 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tz8jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac3c503-e284-4df8-ae5e-0084a884e456\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6d47a8f0a09eac03369c1f21e44d8ec6aaf85d4d7ad180579ecd30a6311061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgb6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tz8jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.250044 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13da86d5-c968-4e20-88dd-86b52c6402a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5f07c14a94021761f655ab66d0782a878dacdacbe9abc2c02db132e5f5721a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0c8909f295c8aea1aa2d6da1a709d1692b8fbdd6bb389d339d644655788328b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://267daf399d26dd0253eed95c94b44bd4299990a14bb28159ddfd1f324dec6192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fd36ca3590a916ec08c8ae4d3b410eb11261c6a5295a4274119f93e08079cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.261563 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.271933 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b14ccc4339f1670442e8d9c16039fb3607ed2097daf0e58291333a9d566dc02e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.282841 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a98ef3151784bd76a916ca4dc6d85c25c10bf0d9eca4444780ceb2c826e439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7741a57e2e55c2f88115cf30643e5c4cd87ab58081677cccdd027bf1307a7a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.324176 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.324259 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.324270 4869 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.324284 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.324292 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:50Z","lastTransitionTime":"2026-01-30T21:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.426689 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.426736 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.426748 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.426766 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.426780 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:50Z","lastTransitionTime":"2026-01-30T21:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.530100 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.530137 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.530145 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.530160 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.530170 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:50Z","lastTransitionTime":"2026-01-30T21:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.632926 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.633214 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.633225 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.633240 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.633250 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:50Z","lastTransitionTime":"2026-01-30T21:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.738138 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.738165 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.738175 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.738189 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.738198 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:50Z","lastTransitionTime":"2026-01-30T21:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.839775 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.839808 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.839817 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.839830 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.839839 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:50Z","lastTransitionTime":"2026-01-30T21:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.915020 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 04:54:45.096373214 +0000 UTC Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.941943 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.941977 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.941989 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.942004 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:50 crc kubenswrapper[4869]: I0130 21:43:50.942015 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:50Z","lastTransitionTime":"2026-01-30T21:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.044319 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.044546 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.044648 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.044779 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.045018 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:51Z","lastTransitionTime":"2026-01-30T21:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.068448 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" event={"ID":"c39d4fe5-06cd-4ea4-8336-bd481332c475","Type":"ContainerStarted","Data":"c4f3e3e5f9a768662c4da62f383068742f43a32394db79e2a0302c99dbe53097"} Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.068923 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.072827 4869 generic.go:334] "Generic (PLEG): container finished" podID="ecd656f9-7188-4998-8195-c2ec92442b7d" containerID="5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217" exitCode=0 Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.072931 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" event={"ID":"ecd656f9-7188-4998-8195-c2ec92442b7d","Type":"ContainerDied","Data":"5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217"} Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.082387 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6fc0664-5e80-440d-a6e8-4189cdf5c500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a73c5cac1fdcb8d81bccdb2eea7b34e75752439d7871493f4afdd0325f02b2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fa528823e4fa19f4de380fa7c5ab78f40c763a6ea6624ba20e1a65a81980d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vzgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.089484 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.094276 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tz8jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac3c503-e284-4df8-ae5e-0084a884e456\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6d47a8f0a09eac03369c1f21e44d8ec6aaf85d4d7ad180579ecd30a6311061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgb6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tz8jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.110492 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c39d4fe5-06cd-4ea4-8336-bd481332c475\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f3e3e5f9a768662c4da62f383068742f43a323
94db79e2a0302c99dbe53097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-stqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.125763 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a48f728-068e-4f6e-9794-12d375245df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb0a197870fc1ba15cbe9437452dfeac28e7670fe2383e71f5e29276082f7236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f22a7faa65595d272f6cb952ac0054da3d3b6a0017dc004f52c65d823f1da06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d705fd79e95205c59ebe341512286e7be25a663cdfe41ec50f0f57582c9f5ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 21:43:39.192174 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 21:43:39.192295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:43:39.192865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2603741252/tls.crt::/tmp/serving-cert-2603741252/tls.key\\\\\\\"\\\\nI0130 21:43:39.581448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:43:39.592260 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:43:39.592286 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:43:39.592313 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:43:39.592318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:43:39.598780 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:43:39.598801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:43:39.598815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:43:39.598819 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:43:39.598822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:43:39.598829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:43:39.601912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9563e1c219434c81376559138e931e8457d653060f639891e4a15b3a61bcf2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.140926 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e768b90cb0dae0e7c594e45deebf9178321fecc14339fa1e49769e13c950ff9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.147397 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.147433 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.147442 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.147455 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.147464 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:51Z","lastTransitionTime":"2026-01-30T21:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.154385 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13da86d5-c968-4e20-88dd-86b52c6402a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5f07c14a94021761f655ab66d0782a878dacdacbe9abc2c02db132e5f5721a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0c8909f295c8aea1aa2d6da1a709d1692b8fbdd6bb389d339d644655788328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://267daf399d26dd0253eed95c94b44bd4299990a14bb28159ddfd1f324dec6192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fd36ca3590a916ec08c8ae4d3b410eb11261c6a5295a4274119f93e08079cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.169429 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.181922 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b14ccc4339f1670442e8d9c16039fb3607ed2097daf0e58291333a9d566dc02e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.193142 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a98ef3151784bd76a916ca4dc6d85c25c10bf0d9eca4444780ceb2c826e439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7741a57e2e55c2f88115cf30643e5c4cd87ab58081677cccdd027bf1307a7a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.205467 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.214329 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v9n4p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ca98e-63b4-4337-af06-7525b62333b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c53920051c0c7658e9ffebac1382950ffea94ead520f2ca1b04cb6a0f9e15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk245\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v9n4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.238105 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.249983 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.250021 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.250062 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.250080 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.250089 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:51Z","lastTransitionTime":"2026-01-30T21:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.254478 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd656f9-7188-4998-8195-c2ec92442b7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng7
9r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jdbl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.264726 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c24fb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a9e9fe8-01db-40b0-bdd8-e3d626df037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c97e257984df620e4f118db44cb63e7748a6d62887c3904f1a2a0641991602d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p2pd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c24fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.280012 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6fc0664-5e80-440d-a6e8-4189cdf5c500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a73c5cac1fdcb8d81bccdb2eea7b34e75752439d7871493f4afdd0325f02b2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fa528823e4fa19f4de380fa7c5ab78f40c763a6ea6624ba20e1a65a81980d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vzgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.292546 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-tz8jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac3c503-e284-4df8-ae5e-0084a884e456\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6d47a8f0a09eac03369c1f21e44d8ec6aaf85d4d7ad180579ecd30a6311061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgb6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-tz8jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.315697 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c39d4fe5-06cd-4ea4-8336-bd481332c475\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/m
etrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f3e3e5f9a768662c4da62f383068742f43a32394db79e2a0302c99dbe53097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},
{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-stqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.329856 4869 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a48f728-068e-4f6e-9794-12d375245df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb0a197870fc1ba15cbe9437452dfeac28e7670fe2383e71f5e29276082f7236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f22a7faa65595d272f6cb952ac0054da3d3b6a0017dc004f52c65d823f1da06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d705fd79e95205c59ebe341512286e7be25a663cdfe41ec50f0f57582c9f5ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 21:43:39.192174 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 21:43:39.192295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:43:39.192865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2603741252/tls.crt::/tmp/serving-cert-2603741252/tls.key\\\\\\\"\\\\nI0130 21:43:39.581448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:43:39.592260 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:43:39.592286 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:43:39.592313 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:43:39.592318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:43:39.598780 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:43:39.598801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:43:39.598815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:43:39.598819 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:43:39.598822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:43:39.598829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:43:39.601912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9563e1c219434c81376559138e931e8457d653060f639891e4a15b3a61bcf2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.341879 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e768b90cb0dae0e7c594e45deebf9178321fecc14339fa1e49769e13c950ff9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.352224 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.352252 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.352262 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.352276 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.352286 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:51Z","lastTransitionTime":"2026-01-30T21:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.352770 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13da86d5-c968-4e20-88dd-86b52c6402a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5f07c14a94021761f655ab66d0782a878dacdacbe9abc2c02db132e5f5721a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0c8909f295c8aea1aa2d6da1a709d1692b8fbdd6bb389d339d644655788328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://267daf399d26dd0253eed95c94b44bd4299990a14bb28159ddfd1f324dec6192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fd36ca3590a916ec08c8ae4d3b410eb11261c6a5295a4274119f93e08079cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.365116 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.379222 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b14ccc4339f1670442e8d9c16039fb3607ed2097daf0e58291333a9d566dc02e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.390792 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a98ef3151784bd76a916ca4dc6d85c25c10bf0d9eca4444780ceb2c826e439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7741a57e2e55c2f88115cf30643e5c4cd87ab58081677cccdd027bf1307a7a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.401083 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.411278 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v9n4p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ca98e-63b4-4337-af06-7525b62333b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c53920051c0c7658e9ffebac1382950ffea94ead520f2ca1b04cb6a0f9e15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk245\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v9n4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.421362 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.433912 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd656f9-7188-4998-8195-c2ec92442b7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jdbl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.442416 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c24fb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a9e9fe8-01db-40b0-bdd8-e3d626df037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c97e257984df620e4f118db44cb63e7748a6d62887c3904f1a2a0641991602d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p2pd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c24fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.454045 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.454082 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.454091 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.454103 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.454112 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:51Z","lastTransitionTime":"2026-01-30T21:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.555874 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.555933 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.555950 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.555966 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.555976 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:51Z","lastTransitionTime":"2026-01-30T21:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.661542 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.661588 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.661605 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.661627 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.661643 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:51Z","lastTransitionTime":"2026-01-30T21:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.763373 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.763399 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.763407 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.763419 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.763428 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:51Z","lastTransitionTime":"2026-01-30T21:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.865943 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.865977 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.865988 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.866001 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.866010 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:51Z","lastTransitionTime":"2026-01-30T21:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.875874 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.875963 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:43:51 crc kubenswrapper[4869]: E0130 21:43:51.875983 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.875870 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:43:51 crc kubenswrapper[4869]: E0130 21:43:51.876075 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:43:51 crc kubenswrapper[4869]: E0130 21:43:51.876170 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.915816 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 21:15:24.36014969 +0000 UTC Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.968729 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.968787 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.968806 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.968830 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:51 crc kubenswrapper[4869]: I0130 21:43:51.968846 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:51Z","lastTransitionTime":"2026-01-30T21:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.071370 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.071431 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.071449 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.071474 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.071491 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:52Z","lastTransitionTime":"2026-01-30T21:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.079082 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" event={"ID":"ecd656f9-7188-4998-8195-c2ec92442b7d","Type":"ContainerStarted","Data":"eeed7a751cd65072ffc31d4db0396578155d50518ad9dae28ced02985d1e4c62"} Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.079189 4869 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.079752 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.095728 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b14ccc4339f1670442e8d9c16039fb3607ed2097daf0e58291333a9d566dc02e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.100026 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.108881 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a98ef3151784bd76a916ca4dc6d85c25c10bf0d9eca4444780ceb2c826e439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7741a57e2e55c2f88115cf30643e5c4cd87ab58081677cccdd027bf1307a7a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.121455 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13da86d5-c968-4e20-88dd-86b52c6402a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5f07c14a94021761f655ab66d0782a878dacdacbe9abc2c02db132e5f5721a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0c8909f295c8aea1aa2d6da1a709d1692b8fbdd6bb389d339d644655788328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://267daf399d26dd0253eed95c94b44bd4299990a14bb28159ddfd1f324dec6192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fd36ca3590a916ec08c8ae4d3b410eb11261c6a5295a4274119f93e08079cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.133013 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.145964 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v9n4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ca98e-63b4-4337-af06-7525b62333b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c53920051c0c7658e9ffebac1382950ffea94ead520f2ca1b04cb6a0f9e15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk245\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v9n4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T21:43:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.161222 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.172989 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c24fb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a9e9fe8-01db-40b0-bdd8-e3d626df037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c97e257984df620e4f118db44cb63e7748a6d62887c3904f1a2a0641991602d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p2pd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c24fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.176942 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.176987 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.176999 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.177015 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.177029 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:52Z","lastTransitionTime":"2026-01-30T21:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.186322 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.201739 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd656f9-7188-4998-8195-c2ec92442b7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeed7a751cd65072ffc31d4db0396578155d50518ad9dae28ced02985d1e4c62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jdbl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.213168 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e768b90cb0dae0e7c594e45deebf9178321fecc14339fa1e49769e13c950ff9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.227496 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6fc0664-5e80-440d-a6e8-4189cdf5c500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a73c5cac1fdcb8d81bccdb2eea7b34e75752439d7871493f4afdd0325f02b2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fa528823e4fa19f4de380fa7c5ab78f40c763a6ea6624ba20e1a65a81980d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vzgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.243540 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-tz8jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac3c503-e284-4df8-ae5e-0084a884e456\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6d47a8f0a09eac03369c1f21e44d8ec6aaf85d4d7ad180579ecd30a6311061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgb6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-tz8jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.263243 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c39d4fe5-06cd-4ea4-8336-bd481332c475\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/m
etrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f3e3e5f9a768662c4da62f383068742f43a32394db79e2a0302c99dbe53097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},
{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-stqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.278491 4869 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a48f728-068e-4f6e-9794-12d375245df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb0a197870fc1ba15cbe9437452dfeac28e7670fe2383e71f5e29276082f7236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f22a7faa65595d272f6cb952ac0054da3d3b6a0017dc004f52c65d823f1da06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d705fd79e95205c59ebe341512286e7be25a663cdfe41ec50f0f57582c9f5ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 21:43:39.192174 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 21:43:39.192295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:43:39.192865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2603741252/tls.crt::/tmp/serving-cert-2603741252/tls.key\\\\\\\"\\\\nI0130 21:43:39.581448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:43:39.592260 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:43:39.592286 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:43:39.592313 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:43:39.592318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:43:39.598780 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:43:39.598801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:43:39.598815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:43:39.598819 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:43:39.598822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:43:39.598829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:43:39.601912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9563e1c219434c81376559138e931e8457d653060f639891e4a15b3a61bcf2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.279427 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.279459 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.279469 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.279517 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.279527 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:52Z","lastTransitionTime":"2026-01-30T21:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.290310 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e768b90cb0dae0e7c594e45deebf9178321fecc14339fa1e49769e13c950ff9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.311821 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6fc0664-5e80-440d-a6e8-4189cdf5c500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a73c5cac1fdcb8d81bccdb2eea7b34e75752439d7871493f4afdd0325f02b2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fa528823e4fa19f4de380fa7c5ab78f40c763a6ea6624ba20e1a65a81980d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vzgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.343399 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-tz8jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac3c503-e284-4df8-ae5e-0084a884e456\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6d47a8f0a09eac03369c1f21e44d8ec6aaf85d4d7ad180579ecd30a6311061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgb6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-tz8jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.371251 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c39d4fe5-06cd-4ea4-8336-bd481332c475\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f3e3e5f9a768662c4da62f383068742f43a32394db79e2a0302c99dbe53097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountP
ath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-stqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.381400 4869 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.381440 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.381452 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.381469 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.381482 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:52Z","lastTransitionTime":"2026-01-30T21:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.384861 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a48f728-068e-4f6e-9794-12d375245df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb0a197870fc1ba15cbe9437452dfeac28e7670fe2383e71f5e29276082f7236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f22a7faa65595d272f6cb952ac0054da3d3b6a0017dc004f52c65d823f1da06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d705fd79e95205c59ebe341512286e7be25a663cdfe41ec50f0f57582c9f5ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 21:43:39.192174 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 21:43:39.192295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:43:39.192865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2603741252/tls.crt::/tmp/serving-cert-2603741252/tls.key\\\\\\\"\\\\nI0130 21:43:39.581448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:43:39.592260 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:43:39.592286 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:43:39.592313 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:43:39.592318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:43:39.598780 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:43:39.598801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:43:39.598815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:43:39.598819 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:43:39.598822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:43:39.598829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:43:39.601912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9563e1c219434c81376559138e931e8457d653060f639891e4a15b3a61bcf2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.399175 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a98ef3151784bd76a916ca4dc6d85c25c10bf0d9eca4444780ceb2c826e439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7741a57e2e55c2f88115cf30643e5c4cd87ab58081677cccdd027bf1307a7a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.409729 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13da86d5-c968-4e20-88dd-86b52c6402a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5f07c14a94021761f655ab66d0782a878dacdacbe9abc2c02db132e5f5721a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0c8909f295c8aea1aa2d6da1a709d1692b8fbdd6bb389d339d644655788328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://267daf399d26dd0253eed95c94b44bd4299990a14bb28159ddfd1f324dec6192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fd36ca3590a916ec08c8ae4d3b410eb11261c6a5295a4274119f93e08079cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.421267 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.430654 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b14ccc4339f1670442e8d9c16039fb3607ed2097daf0e58291333a9d566dc02e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.442612 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.451070 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v9n4p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ca98e-63b4-4337-af06-7525b62333b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c53920051c0c7658e9ffebac1382950ffea94ead520f2ca1b04cb6a0f9e15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk245\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v9n4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.462677 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.474454 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd656f9-7188-4998-8195-c2ec92442b7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeed7a751cd65072ffc31d4db0396578155d50518ad9dae28ced02985d1e4c62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jdbl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.483034 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.483083 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:52 crc 
kubenswrapper[4869]: I0130 21:43:52.483096 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.483115 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.483128 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:52Z","lastTransitionTime":"2026-01-30T21:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.484388 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c24fb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a9e9fe8-01db-40b0-bdd8-e3d626df037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c97e257984df620e4f118db44cb63e7748a6d62887c3904f1a2a0641991602d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p2pd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c24fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:52Z is after 2025-08-24T17:21:41Z" Jan 
Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.585158 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.585189 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.585198 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.585212 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.585222 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:52Z","lastTransitionTime":"2026-01-30T21:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... the same five-entry status cycle repeats with identical content at 21:43:52.688, 21:43:52.790, 21:43:52.893, and 21:43:52.996; the one unique entry interleaved among those cycles (21:43:52.916) is kept below ...]
Jan 30 21:43:52 crc kubenswrapper[4869]: I0130 21:43:52.916488 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 20:49:44.819088308 +0000 UTC
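[Editor's note: the certificate_manager entry above reports a kubelet-serving certificate expiring 2026-02-24 with a rotation deadline of 2025-11-19, which is already in the past relative to the node clock, so the kubelet will attempt rotation immediately. Upstream certificate_manager picks the deadline at a jittered point roughly 70-90% of the way through the certificate's validity window; a sketch of that computation, with the issue time assumed (hypothetically) to be one year before the logged expiration:

    import datetime
    import random

    # Assumed issue time: one year before the logged expiration (hypothetical).
    not_before = datetime.datetime(2025, 2, 24, 5, 53, 3, tzinfo=datetime.timezone.utc)
    # Expiration copied from the certificate_manager entry above.
    not_after = datetime.datetime(2026, 2, 24, 5, 53, 3, tzinfo=datetime.timezone.utc)

    # Jitter the rotation deadline into the 70-90% span of the validity
    # window, approximating upstream certificate_manager behavior.
    deadline = not_before + (not_after - not_before) * random.uniform(0.7, 0.9)
    print("rotation deadline:", deadline.isoformat())

Under those assumptions the computed deadline lands between early November 2025 and mid-January 2026, consistent with the 2025-11-19 deadline in the log.]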
Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.082308 4869 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.099007 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.099046 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.099062 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.099084 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.099101 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:53Z","lastTransitionTime":"2026-01-30T21:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... the same five-entry status cycle repeats with an identical NotReady condition at 21:43:53.201, 21:43:53.304, 21:43:53.406, and 21:43:53.508 ...]
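[Editor's note: the Ready=False condition recurs every ~100 ms above because the container runtime keeps reporting NetworkReady=false until a CNI configuration file appears. A hedged sketch that reproduces the check the message describes, using the directory named in the log; treating .conf/.conflist/.json files as CNI configurations is an assumption about typical CNI config loaders:

    from pathlib import Path

    # Directory the kubelet names in the NotReady message above.
    cni_dir = Path("/etc/kubernetes/cni/net.d")

    # Collect candidate CNI config files (the extension set is an assumption).
    configs = sorted(
        p for p in cni_dir.glob("*")
        if p.suffix in {".conf", ".conflist", ".json"}
    ) if cni_dir.is_dir() else []

    if not configs:
        print(f"no CNI configuration file in {cni_dir}/ -- network plugin not ready")
    else:
        for path in configs:
            print("found CNI config:", path)

On this node the directory would be expected to stay empty until the network plugin (multus and ovn-kubernetes, both visible in this log) writes its configuration.]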
Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.508803 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qnlzn"]
Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.509350 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qnlzn"
Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.511608 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.511958 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.524260 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:53Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.544442 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd656f9-7188-4998-8195-c2ec92442b7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeed7a751cd65072ffc31d4db0396578155d50518ad9dae28ced02985d1e4c62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jdbl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:53Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.555008 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c24fb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a9e9fe8-01db-40b0-bdd8-e3d626df037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c97e257984df620e4f118db44cb63e7748a6d62887c3904f1a2a0641991602d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p2pd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c24fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-01-30T21:43:53Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.565414 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qnlzn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e92fcbf8-b6b4-4531-a7cf-ed59225dd821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qnlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T21:43:53Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.578209 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a48f728-068e-4f6e-9794-12d375245df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb0a197870fc1ba15cbe9437452dfeac28e7670fe2383e71f5e29276082f7236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f22a7faa65595d272f6cb952ac0054da3d3b6a0017dc004f52c65d823f1da06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d705fd79e95205c59ebe341512286e7be25a663cdfe41ec50f0f57582c9f5ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 21:43:39.192174 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 21:43:39.192295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:43:39.192865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2603741252/tls.crt::/tmp/serving-cert-2603741252/tls.key\\\\\\\"\\\\nI0130 21:43:39.581448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:43:39.592260 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:43:39.592286 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:43:39.592313 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:43:39.592318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:43:39.598780 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:43:39.598801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:43:39.598815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:43:39.598819 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:43:39.598822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:43:39.598829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:43:39.601912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9563e1c219434c81376559138e931e8457d653060f639891e4a15b3a61bcf2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:53Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.592437 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e92fcbf8-b6b4-4531-a7cf-ed59225dd821-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-qnlzn\" (UID: \"e92fcbf8-b6b4-4531-a7cf-ed59225dd821\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qnlzn" Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.592512 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e92fcbf8-b6b4-4531-a7cf-ed59225dd821-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-qnlzn\" (UID: \"e92fcbf8-b6b4-4531-a7cf-ed59225dd821\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qnlzn" Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.592554 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tsq2\" (UniqueName: \"kubernetes.io/projected/e92fcbf8-b6b4-4531-a7cf-ed59225dd821-kube-api-access-6tsq2\") pod \"ovnkube-control-plane-749d76644c-qnlzn\" (UID: \"e92fcbf8-b6b4-4531-a7cf-ed59225dd821\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qnlzn" Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.592605 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e92fcbf8-b6b4-4531-a7cf-ed59225dd821-env-overrides\") pod \"ovnkube-control-plane-749d76644c-qnlzn\" (UID: \"e92fcbf8-b6b4-4531-a7cf-ed59225dd821\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qnlzn" Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.594577 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e768b90cb0dae0e7c594e45deebf9178321fecc14339fa1e49769e13c950ff9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:53Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.607218 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6fc0664-5e80-440d-a6e8-4189cdf5c500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a73c5cac1fdcb8d81bccdb2eea7b34e75752439d7871493f4afdd0325f02b2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fa528823e4fa19f4de380fa7c5ab78f40c763a6ea6624ba20e1a65a81980d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vzgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:53Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.610550 4869 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.610580 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.610593 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.610610 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.610620 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:53Z","lastTransitionTime":"2026-01-30T21:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.619766 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tz8jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac3c503-e284-4df8-ae5e-0084a884e456\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6d47a8f0a09eac03369c1f21e44d8ec6aaf85d4d7ad180579ecd30a6311061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgb6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tz8jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:53Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.636430 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c39d4fe5-06cd-4ea4-8336-bd481332c475\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f3e3e5f9a768662c4da62f383068742f43a323
94db79e2a0302c99dbe53097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-stqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:53Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.650803 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13da86d5-c968-4e20-88dd-86b52c6402a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5f07c14a94021761f655ab66d0782a878dacdacbe9abc2c02db132e5f5721a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0c8909f295c8aea1aa2d6da1a709d1692b8fbdd6bb389d339d644655788328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://267daf399d26dd0253eed95c94b44bd4299990a14bb28159ddfd1f324dec6192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fd36ca3590a916ec08c8ae4d3b410eb11261c6a5295a4274119f93e08079cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:53Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.672566 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:53Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.685857 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b14ccc4339f1670442e8d9c16039fb3607ed2097daf0e58291333a9d566dc02e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:53Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.693752 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tsq2\" (UniqueName: \"kubernetes.io/projected/e92fcbf8-b6b4-4531-a7cf-ed59225dd821-kube-api-access-6tsq2\") pod \"ovnkube-control-plane-749d76644c-qnlzn\" 
(UID: \"e92fcbf8-b6b4-4531-a7cf-ed59225dd821\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qnlzn" Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.693802 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e92fcbf8-b6b4-4531-a7cf-ed59225dd821-env-overrides\") pod \"ovnkube-control-plane-749d76644c-qnlzn\" (UID: \"e92fcbf8-b6b4-4531-a7cf-ed59225dd821\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qnlzn" Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.694331 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e92fcbf8-b6b4-4531-a7cf-ed59225dd821-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-qnlzn\" (UID: \"e92fcbf8-b6b4-4531-a7cf-ed59225dd821\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qnlzn" Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.694399 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e92fcbf8-b6b4-4531-a7cf-ed59225dd821-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-qnlzn\" (UID: \"e92fcbf8-b6b4-4531-a7cf-ed59225dd821\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qnlzn" Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.694403 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e92fcbf8-b6b4-4531-a7cf-ed59225dd821-env-overrides\") pod \"ovnkube-control-plane-749d76644c-qnlzn\" (UID: \"e92fcbf8-b6b4-4531-a7cf-ed59225dd821\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qnlzn" Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.695153 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e92fcbf8-b6b4-4531-a7cf-ed59225dd821-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-qnlzn\" (UID: \"e92fcbf8-b6b4-4531-a7cf-ed59225dd821\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qnlzn" Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.701036 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e92fcbf8-b6b4-4531-a7cf-ed59225dd821-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-qnlzn\" (UID: \"e92fcbf8-b6b4-4531-a7cf-ed59225dd821\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qnlzn" Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.707410 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a98ef3151784bd76a916ca4dc6d85c25c10bf0d9eca4444780ceb2c826e439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7741a57e2e55c2f88115cf30643e5c4cd87ab58081677cccdd027bf1307a7a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:53Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.712089 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tsq2\" (UniqueName: \"kubernetes.io/projected/e92fcbf8-b6b4-4531-a7cf-ed59225dd821-kube-api-access-6tsq2\") pod \"ovnkube-control-plane-749d76644c-qnlzn\" (UID: \"e92fcbf8-b6b4-4531-a7cf-ed59225dd821\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qnlzn" Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 
21:43:53.713404 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.713449 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.713461 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.713478 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.713491 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:53Z","lastTransitionTime":"2026-01-30T21:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.722985 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:53Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.733164 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v9n4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ca98e-63b4-4337-af06-7525b62333b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c53920051c0c7658e9ffebac1382950ffea94ead520f2ca1b04cb6a0f9e15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk245\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v9n4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T21:43:53Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.820789 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.820832 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.820841 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.820856 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.820865 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:53Z","lastTransitionTime":"2026-01-30T21:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.821075 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qnlzn" Jan 30 21:43:53 crc kubenswrapper[4869]: W0130 21:43:53.833370 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode92fcbf8_b6b4_4531_a7cf_ed59225dd821.slice/crio-72d91e8f990f7d9907d20d6223557d0c3091880ee89136240bff6082e92eddf4 WatchSource:0}: Error finding container 72d91e8f990f7d9907d20d6223557d0c3091880ee89136240bff6082e92eddf4: Status 404 returned error can't find the container with id 72d91e8f990f7d9907d20d6223557d0c3091880ee89136240bff6082e92eddf4 Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.876057 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.876127 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.876176 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:43:53 crc kubenswrapper[4869]: E0130 21:43:53.876326 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:43:53 crc kubenswrapper[4869]: E0130 21:43:53.876420 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:43:53 crc kubenswrapper[4869]: E0130 21:43:53.876550 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.916628 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 17:07:44.287427412 +0000 UTC Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.923203 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.923283 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.923302 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.923324 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:53 crc kubenswrapper[4869]: I0130 21:43:53.923343 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:53Z","lastTransitionTime":"2026-01-30T21:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.025179 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.025217 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.025227 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.025241 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.025250 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:54Z","lastTransitionTime":"2026-01-30T21:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.086620 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qnlzn" event={"ID":"e92fcbf8-b6b4-4531-a7cf-ed59225dd821","Type":"ContainerStarted","Data":"72d91e8f990f7d9907d20d6223557d0c3091880ee89136240bff6082e92eddf4"} Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.086703 4869 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.126913 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.126971 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.126984 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.127005 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.127019 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:54Z","lastTransitionTime":"2026-01-30T21:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.229190 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.229253 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.229276 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.229308 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.229330 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:54Z","lastTransitionTime":"2026-01-30T21:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.331972 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.332006 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.332015 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.332028 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.332037 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:54Z","lastTransitionTime":"2026-01-30T21:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.434289 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.434350 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.434368 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.434386 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.434401 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:54Z","lastTransitionTime":"2026-01-30T21:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.537423 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.537484 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.537500 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.537522 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.537537 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:54Z","lastTransitionTime":"2026-01-30T21:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.610058 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-45w6p"] Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.610605 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:43:54 crc kubenswrapper[4869]: E0130 21:43:54.610678 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.623389 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-45w6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b980f4db-64d3-48c9-9ff8-18f23c4888cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf95q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf95q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-45w6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.636093 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.639857 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.639913 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.639925 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.639940 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.639950 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:54Z","lastTransitionTime":"2026-01-30T21:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.653113 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd656f9-7188-4998-8195-c2ec92442b7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeed7a751cd65072ffc31d4db0396578155d50518ad9dae28ced02985d1e4c62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jdbl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.664126 4869 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-c24fb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a9e9fe8-01db-40b0-bdd8-e3d626df037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c97e257984df620e4f118db44cb63e7748a6d62887c3904f1a2a0641991602d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p2pd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c24fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.679419 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e768b90cb0dae0e7c594e45deebf9178321fecc14339fa1e49769e13c950ff9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.694994 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6fc0664-5e80-440d-a6e8-4189cdf5c500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a73c5cac1fdcb8d81bccdb2eea7b34e75752439d7871493f4afdd0325f02b2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fa528823e4fa19f4de380fa7c5ab78f40c763a6ea6624ba20e1a65a81980d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vzgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.705297 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf95q\" (UniqueName: \"kubernetes.io/projected/b980f4db-64d3-48c9-9ff8-18f23c4888cd-kube-api-access-hf95q\") pod \"network-metrics-daemon-45w6p\" (UID: \"b980f4db-64d3-48c9-9ff8-18f23c4888cd\") " pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.705450 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b980f4db-64d3-48c9-9ff8-18f23c4888cd-metrics-certs\") pod \"network-metrics-daemon-45w6p\" (UID: \"b980f4db-64d3-48c9-9ff8-18f23c4888cd\") " pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.712433 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tz8jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac3c503-e284-4df8-ae5e-0084a884e456\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6d47a8f0a09eac03369c1f21e44d8ec6aaf85d4d7ad180579ecd30a6311061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\
",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgb6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tz8jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.731742 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c39d4fe5-06cd-4ea4-8336-bd481332c475\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f3e3e5f9a768662c4da62f383068742f43a323
94db79e2a0302c99dbe53097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-stqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.746345 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.746412 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.746498 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.746580 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.746596 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:54Z","lastTransitionTime":"2026-01-30T21:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.750177 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qnlzn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e92fcbf8-b6b4-4531-a7cf-ed59225dd821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qnlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-30T21:43:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.773566 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a48f728-068e-4f6e-9794-12d375245df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb0a197870fc1ba15cbe9437452dfeac28e7670fe2383e71f5e29276082f7236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f22a7faa65595d272f6cb952ac0054da3d3b6a0017dc004f52c65d823f1da06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d705fd79e95205c59ebe341512286e7be25a663cdfe41ec50f0f57582c9f5ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 21:43:39.192174 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 21:43:39.192295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:43:39.192865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2603741252/tls.crt::/tmp/serving-cert-2603741252/tls.key\\\\\\\"\\\\nI0130 21:43:39.581448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:43:39.592260 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:43:39.592286 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:43:39.592313 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:43:39.592318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:43:39.598780 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:43:39.598801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:43:39.598815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:43:39.598819 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:43:39.598822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:43:39.598829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:43:39.601912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9563e1c219434c81376559138e931e8457d653060f639891e4a15b3a61bcf2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.789113 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a98ef3151784bd76a916ca4dc6d85c25c10bf0d9eca4444780ceb2c826e439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7741a57e2e55c2f88115cf30643e5c4cd87ab58081677cccdd027bf1307a7a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.807107 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b980f4db-64d3-48c9-9ff8-18f23c4888cd-metrics-certs\") pod \"network-metrics-daemon-45w6p\" (UID: \"b980f4db-64d3-48c9-9ff8-18f23c4888cd\") " pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.807161 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf95q\" (UniqueName: \"kubernetes.io/projected/b980f4db-64d3-48c9-9ff8-18f23c4888cd-kube-api-access-hf95q\") pod \"network-metrics-daemon-45w6p\" (UID: \"b980f4db-64d3-48c9-9ff8-18f23c4888cd\") " pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:43:54 crc kubenswrapper[4869]: E0130 21:43:54.807344 4869 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:43:54 crc kubenswrapper[4869]: E0130 21:43:54.807452 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b980f4db-64d3-48c9-9ff8-18f23c4888cd-metrics-certs podName:b980f4db-64d3-48c9-9ff8-18f23c4888cd nodeName:}" failed. No retries permitted until 2026-01-30 21:43:55.307423696 +0000 UTC m=+36.193181761 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b980f4db-64d3-48c9-9ff8-18f23c4888cd-metrics-certs") pod "network-metrics-daemon-45w6p" (UID: "b980f4db-64d3-48c9-9ff8-18f23c4888cd") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.809459 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13da86d5-c968-4e20-88dd-86b52c6402a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5f07c14a94021761f655ab66d0782a878dacdacbe9abc2c02db132e5f5721a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0c8909f295c8aea1aa2d6da1a709d1692b8fbdd6bb389d339d644655788328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://267daf399d26dd0253eed95c94b44bd4299990a14bb28159ddfd1f324dec6192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fd36ca3590a916ec08c8ae4d3b410eb11261c6a5295a4274119f93e08079cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.841205 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.849554 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.849596 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.849613 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.849634 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.849649 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:54Z","lastTransitionTime":"2026-01-30T21:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.851868 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf95q\" (UniqueName: \"kubernetes.io/projected/b980f4db-64d3-48c9-9ff8-18f23c4888cd-kube-api-access-hf95q\") pod \"network-metrics-daemon-45w6p\" (UID: \"b980f4db-64d3-48c9-9ff8-18f23c4888cd\") " pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.854274 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b14ccc4339f1670442e8d9c16039fb3607ed2097daf0e58291333a9d566dc02e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.870235 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.886584 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v9n4p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ca98e-63b4-4337-af06-7525b62333b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c53920051c0c7658e9ffebac1382950ffea94ead520f2ca1b04cb6a0f9e15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk245\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v9n4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.916967 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 07:24:23.362158946 +0000 UTC Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.952027 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.952307 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.952730 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.952890 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:54 crc kubenswrapper[4869]: I0130 21:43:54.953055 4869 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:54Z","lastTransitionTime":"2026-01-30T21:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.055834 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.056002 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.056030 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.056067 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.056091 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:55Z","lastTransitionTime":"2026-01-30T21:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.093958 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qnlzn" event={"ID":"e92fcbf8-b6b4-4531-a7cf-ed59225dd821","Type":"ContainerStarted","Data":"7c73341f991fc768d5dcc57229c33204b0ade5469b5b9d7a970f38b97c3d6696"} Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.094022 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qnlzn" event={"ID":"e92fcbf8-b6b4-4531-a7cf-ed59225dd821","Type":"ContainerStarted","Data":"43fd4613a6d7e2360584cfe98ccaa73d8ed65139ba3c249aeb716b4303d170eb"} Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.096120 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-stqvf_c39d4fe5-06cd-4ea4-8336-bd481332c475/ovnkube-controller/0.log" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.100476 4869 generic.go:334] "Generic (PLEG): container finished" podID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerID="c4f3e3e5f9a768662c4da62f383068742f43a32394db79e2a0302c99dbe53097" exitCode=1 Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.100655 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" event={"ID":"c39d4fe5-06cd-4ea4-8336-bd481332c475","Type":"ContainerDied","Data":"c4f3e3e5f9a768662c4da62f383068742f43a32394db79e2a0302c99dbe53097"} Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.101435 4869 scope.go:117] "RemoveContainer" containerID="c4f3e3e5f9a768662c4da62f383068742f43a32394db79e2a0302c99dbe53097" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.117268 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd656f9-7188-4998-8195-c2ec92442b7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeed7a751cd65072ffc31d4db0396578155d50518ad9dae28ced02985d1e4c62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jdbl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.128495 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c24fb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a9e9fe8-01db-40b0-bdd8-e3d626df037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c97e257984df620e4f118db44cb63e7748a6d62887c3904f1a2a0641991602d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p2pd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c24fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.141051 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-45w6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b980f4db-64d3-48c9-9ff8-18f23c4888cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf95q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf95q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-45w6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.157632 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.159025 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.159173 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.159275 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.159380 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.159449 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:55Z","lastTransitionTime":"2026-01-30T21:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.180790 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e768b90cb0dae0e7c594e45deebf9178321fecc14339fa1e49769e13c950ff9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.194536 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6fc0664-5e80-440d-a6e8-4189cdf5c500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a73c5cac1fdcb8d81bccdb2eea7b34e75752439d7871493f4afdd0325f02b2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fa528823e4fa19f4de380fa7c5ab78f40c763a6ea6624ba20e1a65a81980d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vzgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.211518 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-tz8jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac3c503-e284-4df8-ae5e-0084a884e456\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6d47a8f0a09eac03369c1f21e44d8ec6aaf85d4d7ad180579ecd30a6311061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgb6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-tz8jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.232338 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c39d4fe5-06cd-4ea4-8336-bd481332c475\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f3e3e5f9a768662c4da62f383068742f43a32394db79e2a0302c99dbe53097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountP
ath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-stqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.249257 4869 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qnlzn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e92fcbf8-b6b4-4531-a7cf-ed59225dd821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43fd4613a6d7e2360584cfe98ccaa73d8ed65139ba3c249aeb716b4303d170eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c73341f991fc768d5dcc57229c33204b0ade5469b5b9d7a970f38b97c3d6696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qnlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.262822 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.262870 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.262880 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.262911 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.262922 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:55Z","lastTransitionTime":"2026-01-30T21:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.267712 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a48f728-068e-4f6e-9794-12d375245df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb0a197870fc1ba15cbe9437452dfeac28e7670fe2383e71f5e29276082f7236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f22a7faa65595d272f6cb952ac0054da3d3b6a0017dc004f52c65d823f1da06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d705fd79e95205c59ebe341512286e7be25a663cdfe41ec50f0f57582c9f5ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 21:43:39.192174 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 21:43:39.192295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:43:39.192865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2603741252/tls.crt::/tmp/serving-cert-2603741252/tls.key\\\\\\\"\\\\nI0130 21:43:39.581448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:43:39.592260 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:43:39.592286 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:43:39.592313 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:43:39.592318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:43:39.598780 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:43:39.598801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:43:39.598815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:43:39.598819 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:43:39.598822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:43:39.598829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:43:39.601912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9563e1c219434c81376559138e931e8457d653060f639891e4a15b3a61bcf2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.281348 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.293302 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b14ccc4339f1670442e8d9c16039fb3607ed2097daf0e58291333a9d566dc02e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.305755 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a98ef3151784bd76a916ca4dc6d85c25c10bf0d9eca4444780ceb2c826e439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7741a57e2e55c2f88115cf30643e5c4cd87ab58081677cccdd027bf1307a7a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.313143 
4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b980f4db-64d3-48c9-9ff8-18f23c4888cd-metrics-certs\") pod \"network-metrics-daemon-45w6p\" (UID: \"b980f4db-64d3-48c9-9ff8-18f23c4888cd\") " pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:43:55 crc kubenswrapper[4869]: E0130 21:43:55.313359 4869 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:43:55 crc kubenswrapper[4869]: E0130 21:43:55.313490 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b980f4db-64d3-48c9-9ff8-18f23c4888cd-metrics-certs podName:b980f4db-64d3-48c9-9ff8-18f23c4888cd nodeName:}" failed. No retries permitted until 2026-01-30 21:43:56.313451173 +0000 UTC m=+37.199209248 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b980f4db-64d3-48c9-9ff8-18f23c4888cd-metrics-certs") pod "network-metrics-daemon-45w6p" (UID: "b980f4db-64d3-48c9-9ff8-18f23c4888cd") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.322291 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13da86d5-c968-4e20-88dd-86b52c6402a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5f07c14a94021761f655ab66d0782a878dacdacbe9abc2c02db132e5f5721a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0c8909f295c8aea1aa2d6da1a709d1692b8fbdd6bb389d339d644655788328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-con
troller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://267daf399d26dd0253eed95c94b44bd4299990a14bb28159ddfd1f324dec6192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fd36ca3590a916ec08c8ae4d3b410eb11261c6a5295a4274119f93e08079cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.335529 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.347500 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v9n4p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ca98e-63b4-4337-af06-7525b62333b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c53920051c0c7658e9ffebac1382950ffea94ead520f2ca1b04cb6a0f9e15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk245\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v9n4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.359865 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a48f728-068e-4f6e-9794-12d375245df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb0a197870fc1ba15cbe9437452dfeac28e7670fe2383e71f5e29276082f7236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f22a7faa65595d272f6cb952ac0054da3d3b6a0017dc004f52c65d823f1da06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d705fd79e95205c59ebe341512286e7be25a663cdfe41ec50f0f57582c9f5ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 21:43:39.192174 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 21:43:39.192295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:43:39.192865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2603741252/tls.crt::/tmp/serving-cert-2603741252/tls.key\\\\\\\"\\\\nI0130 21:43:39.581448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:43:39.592260 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:43:39.592286 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:43:39.592313 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:43:39.592318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:43:39.598780 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:43:39.598801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:43:39.598815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:43:39.598819 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:43:39.598822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:43:39.598829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:43:39.601912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9563e1c219434c81376559138e931e8457d653060f639891e4a15b3a61bcf2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.367628 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.367658 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.367668 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.367681 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.367690 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:55Z","lastTransitionTime":"2026-01-30T21:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.373738 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e768b90cb0dae0e7c594e45deebf9178321fecc14339fa1e49769e13c950ff9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.383547 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6fc0664-5e80-440d-a6e8-4189cdf5c500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a73c5cac1fdcb8d81bccdb2eea7b34e75752439d7871493f4afdd0325f02b2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fa528823e4fa19f4de380fa7c5ab78f40c763a6ea6624ba20e1a65a81980d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vzgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.399359 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-tz8jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac3c503-e284-4df8-ae5e-0084a884e456\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6d47a8f0a09eac03369c1f21e44d8ec6aaf85d4d7ad180579ecd30a6311061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgb6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-tz8jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.423564 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c39d4fe5-06cd-4ea4-8336-bd481332c475\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f3e3e5f9a768662c4da62f383068742f43a32394db79e2a0302c99dbe53097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4f3e3e5f9a768662c4da62f383068742f43a32394db79e2a0302c99dbe53097\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"8 for removal\\\\nI0130 21:43:53.791005 6188 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 21:43:53.791025 6188 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 21:43:53.791033 6188 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 21:43:53.791055 6188 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0130 21:43:53.791062 6188 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0130 21:43:53.791085 6188 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0130 21:43:53.791091 6188 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 21:43:53.791104 6188 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0130 21:43:53.791085 6188 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 21:43:53.791120 6188 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 21:43:53.791149 6188 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 21:43:53.791210 6188 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 21:43:53.791247 6188 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 21:43:53.791316 6188 factory.go:656] Stopping watch factory\\\\nI0130 21:43:53.791354 6188 ovnkube.go:599] Stopped 
ovnkube\\\\nI0130 21:43:53.791402 6188 handler.go:208] Removed *v1.Node event handler 2\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-stqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.438705 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qnlzn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e92fcbf8-b6b4-4531-a7cf-ed59225dd821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43fd4613a6d7e2360584cfe98ccaa73d8ed65139ba3c249aeb716b4303d170eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-6tsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c73341f991fc768d5dcc57229c33204b0ade5469b5b9d7a970f38b97c3d6696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qnlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.451492 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13da86d5-c968-4e20-88dd-86b52c6402a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5f07c14a94021761f655ab66d0782a878dacdacbe9abc2c02db132e5f5721a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0c8909f295c8aea1aa2d6da1a709d1692b8fbdd6bb389d339d644655788328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://267daf399d26dd0253eed95c94b44bd4299990a14bb28159ddfd1f324dec6192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fd36ca3590a916ec08c8ae4d3b410eb11261c6a5295a4274119f93e08079cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.465963 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.472821 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.473006 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.473107 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.473213 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.473293 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:55Z","lastTransitionTime":"2026-01-30T21:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.479198 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b14ccc4339f1670442e8d9c16039fb3607ed2097daf0e58291333a9d566dc02e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.490658 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a98ef3151784bd76a916ca4dc6d85c25c10bf0d9eca4444780ceb2c826e439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7741a57e2e55c2f88115cf30643e5c4cd87ab58081677cccdd027bf1307a7a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.501016 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.510873 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v9n4p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ca98e-63b4-4337-af06-7525b62333b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c53920051c0c7658e9ffebac1382950ffea94ead520f2ca1b04cb6a0f9e15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk245\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v9n4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.521444 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.538022 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd656f9-7188-4998-8195-c2ec92442b7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeed7a751cd65072ffc31d4db0396578155d50518ad9dae28ced02985d1e4c62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jdbl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.548423 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c24fb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a9e9fe8-01db-40b0-bdd8-e3d626df037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c97e257984df620e4f118db44cb63e7748a6d62887c3904f1a2a0641991602d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p2pd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c24fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.558828 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-45w6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b980f4db-64d3-48c9-9ff8-18f23c4888cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf95q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf95q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-45w6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.575588 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.575629 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.575643 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.575666 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.575679 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:55Z","lastTransitionTime":"2026-01-30T21:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.616075 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:43:55 crc kubenswrapper[4869]: E0130 21:43:55.616274 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:44:11.616244669 +0000 UTC m=+52.502002694 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.677858 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.677887 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.677920 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.677941 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.677951 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:55Z","lastTransitionTime":"2026-01-30T21:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.716920 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.717005 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.717030 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.717056 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:43:55 crc kubenswrapper[4869]: E0130 21:43:55.717135 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:43:55 crc kubenswrapper[4869]: E0130 21:43:55.717210 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:44:11.717170254 +0000 UTC m=+52.602928279 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:43:55 crc kubenswrapper[4869]: E0130 21:43:55.717301 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:43:55 crc kubenswrapper[4869]: E0130 21:43:55.717355 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:43:55 crc kubenswrapper[4869]: E0130 21:43:55.717373 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:43:55 crc kubenswrapper[4869]: E0130 21:43:55.717439 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 21:44:11.71741767 +0000 UTC m=+52.603175765 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:43:55 crc kubenswrapper[4869]: E0130 21:43:55.717303 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:43:55 crc kubenswrapper[4869]: E0130 21:43:55.717536 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:43:55 crc kubenswrapper[4869]: E0130 21:43:55.717561 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:43:55 crc kubenswrapper[4869]: E0130 21:43:55.717312 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:43:55 crc kubenswrapper[4869]: E0130 21:43:55.717644 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 21:44:11.717617166 +0000 UTC m=+52.603375231 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:43:55 crc kubenswrapper[4869]: E0130 21:43:55.717687 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:44:11.717659428 +0000 UTC m=+52.603417503 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.780596 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.780640 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.780648 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.780664 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.780675 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:55Z","lastTransitionTime":"2026-01-30T21:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.876699 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.876818 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:43:55 crc kubenswrapper[4869]: E0130 21:43:55.876853 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:43:55 crc kubenswrapper[4869]: E0130 21:43:55.877081 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.877163 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:43:55 crc kubenswrapper[4869]: E0130 21:43:55.877222 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.877400 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:43:55 crc kubenswrapper[4869]: E0130 21:43:55.877457 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.914640 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.914677 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.914685 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.914699 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.914709 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:55Z","lastTransitionTime":"2026-01-30T21:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:43:55 crc kubenswrapper[4869]: I0130 21:43:55.917777 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 17:34:11.466662745 +0000 UTC Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.016862 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.016936 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.016949 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.016968 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.016981 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:56Z","lastTransitionTime":"2026-01-30T21:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.064036 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.064116 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.064130 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.064155 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.064170 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:56Z","lastTransitionTime":"2026-01-30T21:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:43:56 crc kubenswrapper[4869]: E0130 21:43:56.078608 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:43:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:43:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:43:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:43:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eed4c80a-2486-4f50-8ae9-4ddbd620d70e\\\",\\\"systemUUID\\\":\\\"073254b5-c7c0-49f1-bed8-4438b0f03db1\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:56Z is after 
2025-08-24T17:21:41Z" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.082016 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.082051 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.082063 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.082080 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.082092 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:56Z","lastTransitionTime":"2026-01-30T21:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:56 crc kubenswrapper[4869]: E0130 21:43:56.108620 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:43:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:43:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:43:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:43:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eed4c80a-2486-4f50-8ae9-4ddbd620d70e\\\",\\\"systemUUID\\\":\\\"073254b5-c7c0-49f1-bed8-4438b0f03db1\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:56Z is after 
2025-08-24T17:21:41Z" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.117427 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.117472 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.117484 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.117504 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.117515 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:56Z","lastTransitionTime":"2026-01-30T21:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.119676 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-stqvf_c39d4fe5-06cd-4ea4-8336-bd481332c475/ovnkube-controller/0.log" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.122126 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" event={"ID":"c39d4fe5-06cd-4ea4-8336-bd481332c475","Type":"ContainerStarted","Data":"53fdfcb96702330a4f64446223b5dbdcf2e031b0dc1ac2fbf9fac87ae5e51b60"} Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.122690 4869 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 21:43:56 crc kubenswrapper[4869]: E0130 21:43:56.130861 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:43:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:43:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:43:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:43:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8805
1c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eed4c80a-2486-4f50-8ae9-4ddbd620d70e\\\",\\\"systemUUID\\\":\\\"073254b5-c7c0-49f1-bed8-4438b0f03db1\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.134932 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.134975 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.134986 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.135001 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.135011 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:56Z","lastTransitionTime":"2026-01-30T21:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.139348 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e768b90cb0dae0e7c594e45deebf9178321fecc14339fa1e49769e13c950ff9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:56 crc kubenswrapper[4869]: E0130 21:43:56.147440 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:43:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:43:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:43:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:43:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eed4c80a-2486-4f50-8ae9-4ddbd620d70e\\\",\\\"systemUUID\\\":\\\"073254b5-c7c0-49f1-bed8-4438b0f03db1\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:56Z is after 
2025-08-24T17:21:41Z" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.151360 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.151472 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.151650 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.151737 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.151870 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:56Z","lastTransitionTime":"2026-01-30T21:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.151593 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6fc0664-5e80-440d-a6e8-4189cdf5c500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a73c5cac1fdcb8d81bccdb2eea7b34e75752439d7871493f4afdd0325f02b2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fa528823e4fa19f4de380fa7c5ab78f40c763a6ea6624ba20e1a65a81980d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e
95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vzgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:56 crc kubenswrapper[4869]: E0130 21:43:56.163380 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:43:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:43:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:43:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:43:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eed4c80a-2486-4f50-8ae9-4ddbd620d70e\\\",\\\"systemUUID\\\":\\\"073254b5-c7c0-49f1-bed8-4438b0f03db1\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:56Z is after 
2025-08-24T17:21:41Z" Jan 30 21:43:56 crc kubenswrapper[4869]: E0130 21:43:56.163503 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.165813 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.165843 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.165853 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.165869 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.165885 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:56Z","lastTransitionTime":"2026-01-30T21:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.166679 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tz8jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac3c503-e284-4df8-ae5e-0084a884e456\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6d47a8f0a09eac03369c1f21e44d8ec6aaf85d4d7ad180579ecd30a6311061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mount
Path\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgb6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tz8jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.189915 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c39d4fe5-06cd-4ea4-8336-bd481332c475\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53fdfcb96702330a4f64446223b5dbdcf2e031b0
dc1ac2fbf9fac87ae5e51b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4f3e3e5f9a768662c4da62f383068742f43a32394db79e2a0302c99dbe53097\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"8 for removal\\\\nI0130 21:43:53.791005 6188 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 21:43:53.791025 6188 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 21:43:53.791033 6188 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 21:43:53.791055 6188 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0130 21:43:53.791062 6188 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0130 21:43:53.791085 6188 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0130 21:43:53.791091 6188 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 21:43:53.791104 6188 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0130 21:43:53.791085 6188 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 21:43:53.791120 6188 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 21:43:53.791149 6188 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 21:43:53.791210 6188 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 21:43:53.791247 6188 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 21:43:53.791316 6188 factory.go:656] Stopping watch factory\\\\nI0130 21:43:53.791354 6188 ovnkube.go:599] Stopped ovnkube\\\\nI0130 21:43:53.791402 6188 handler.go:208] Removed *v1.Node event handler 
2\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-stqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.201536 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qnlzn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e92fcbf8-b6b4-4531-a7cf-ed59225dd821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43fd4613a6d7e2360584cfe98ccaa73d8ed65139ba3c249aeb716b4303d170eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c73341f991fc768d5dcc57229c33204b0ade5469b5b9d7a970f38b97c3d6696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qnlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:56Z is after 2025-08-24T17:21:41Z" Jan 30 
21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.216867 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a48f728-068e-4f6e-9794-12d375245df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb0a197870fc1ba15cbe9437452dfeac28e7670fe2383e71f5e29276082f7236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f22a7faa65595d272f6cb952ac0054da3d3b6a0017dc004f52c65d823f1da06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d705fd79e95205c59ebe341512286e7be25a663cdfe41ec50f0f57582c9f5ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 21:43:39.192174 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 21:43:39.192295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:43:39.192865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2603741252/tls.crt::/tmp/serving-cert-2603741252/tls.key\\\\\\\"\\\\nI0130 21:43:39.581448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:43:39.592260 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:43:39.592286 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:43:39.592313 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:43:39.592318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:43:39.598780 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:43:39.598801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:43:39.598815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:43:39.598819 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:43:39.598822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:43:39.598829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:43:39.601912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9563e1c219434c81376559138e931e8457d653060f639891e4a15b3a61bcf2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.236389 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.251079 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b14ccc4339f1670442e8d9c16039fb3607ed2097daf0e58291333a9d566dc02e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.266203 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a98ef3151784bd76a916ca4dc6d85c25c10bf0d9eca4444780ceb2c826e439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7741a57e2e55c2f88115cf30643e5c4cd87ab58081677cccdd027bf1307a7a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.267657 
4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.267681 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.267690 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.267704 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.267713 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:56Z","lastTransitionTime":"2026-01-30T21:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.282970 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13da86d5-c968-4e20-88dd-86b52c6402a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5f07c14a94021761f655ab66d0782a878dacdacbe9abc2c02db132e5f5721a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0c8909f295c8aea1aa2d6da1a709d1692b8fbdd6bb389d339d644655788328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://267daf399d26dd0253eed95c94b44bd4299990a14bb28159ddfd1f324dec6192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fd36ca3590a916ec08c8ae4d3b410eb11261c6a5295a4274119f93e08079cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.296203 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.310172 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v9n4p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ca98e-63b4-4337-af06-7525b62333b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c53920051c0c7658e9ffebac1382950ffea94ead520f2ca1b04cb6a0f9e15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk245\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v9n4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.321819 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b980f4db-64d3-48c9-9ff8-18f23c4888cd-metrics-certs\") pod \"network-metrics-daemon-45w6p\" (UID: \"b980f4db-64d3-48c9-9ff8-18f23c4888cd\") " pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:43:56 crc kubenswrapper[4869]: E0130 21:43:56.322097 4869 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:43:56 crc kubenswrapper[4869]: E0130 21:43:56.322253 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b980f4db-64d3-48c9-9ff8-18f23c4888cd-metrics-certs podName:b980f4db-64d3-48c9-9ff8-18f23c4888cd nodeName:}" failed. No retries permitted until 2026-01-30 21:43:58.322217788 +0000 UTC m=+39.207975823 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b980f4db-64d3-48c9-9ff8-18f23c4888cd-metrics-certs") pod "network-metrics-daemon-45w6p" (UID: "b980f4db-64d3-48c9-9ff8-18f23c4888cd") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.332693 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd656f9-7188-4998-8195-c2ec92442b7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeed7a751cd65072ffc31d4db0396578155d50518ad9dae28ced02985d1e4c62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f
\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jdbl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.343793 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c24fb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a9e9fe8-01db-40b0-bdd8-e3d626df037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c97e257984df620e4f118db44cb63e7748a6d62887c3904f1a2a0641991602d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p2pd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c24fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.356262 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-45w6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b980f4db-64d3-48c9-9ff8-18f23c4888cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf95q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf95q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-45w6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.367231 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.370055 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.370095 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.370109 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.370131 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.370146 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:56Z","lastTransitionTime":"2026-01-30T21:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.472557 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.472591 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.472600 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.472615 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.472627 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:56Z","lastTransitionTime":"2026-01-30T21:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.576604 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.576656 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.576667 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.576688 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.576701 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:56Z","lastTransitionTime":"2026-01-30T21:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.680576 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.680657 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.680682 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.680718 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.680744 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:56Z","lastTransitionTime":"2026-01-30T21:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.783763 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.783838 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.783857 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.783888 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.783967 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:56Z","lastTransitionTime":"2026-01-30T21:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.886824 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.886872 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.886881 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.886914 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.886930 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:56Z","lastTransitionTime":"2026-01-30T21:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.918065 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 03:30:43.603777719 +0000 UTC Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.990263 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.990339 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.990366 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.990403 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:56 crc kubenswrapper[4869]: I0130 21:43:56.990430 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:56Z","lastTransitionTime":"2026-01-30T21:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.094451 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.094503 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.094518 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.094542 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.094558 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:57Z","lastTransitionTime":"2026-01-30T21:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.130864 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-stqvf_c39d4fe5-06cd-4ea4-8336-bd481332c475/ovnkube-controller/1.log" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.131798 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-stqvf_c39d4fe5-06cd-4ea4-8336-bd481332c475/ovnkube-controller/0.log" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.138060 4869 generic.go:334] "Generic (PLEG): container finished" podID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerID="53fdfcb96702330a4f64446223b5dbdcf2e031b0dc1ac2fbf9fac87ae5e51b60" exitCode=1 Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.138110 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" event={"ID":"c39d4fe5-06cd-4ea4-8336-bd481332c475","Type":"ContainerDied","Data":"53fdfcb96702330a4f64446223b5dbdcf2e031b0dc1ac2fbf9fac87ae5e51b60"} Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.138162 4869 scope.go:117] "RemoveContainer" containerID="c4f3e3e5f9a768662c4da62f383068742f43a32394db79e2a0302c99dbe53097" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.140701 4869 scope.go:117] "RemoveContainer" containerID="53fdfcb96702330a4f64446223b5dbdcf2e031b0dc1ac2fbf9fac87ae5e51b60" Jan 30 21:43:57 crc kubenswrapper[4869]: E0130 21:43:57.143050 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-stqvf_openshift-ovn-kubernetes(c39d4fe5-06cd-4ea4-8336-bd481332c475)\"" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.156823 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a98ef3151784bd76a916ca4dc6d85c25c10bf0d9eca4444780ceb2c826e439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7741a57e2e55c2f88115cf30643e5c4cd87ab58081677cccdd027bf1307a7a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:57Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.177090 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13da86d5-c968-4e20-88dd-86b52c6402a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5f07c14a94021761f655ab66d0782a878dacdacbe9abc2c02db132e5f5721a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0c8909f295c8aea1aa2d6da1a709d1692b8fbdd6bb389d339d644655788328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://267daf399d26dd0253eed95c94b44bd4299990a14bb28159ddfd1f324dec6192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fd36ca3590a916ec08c8ae4d3b410eb11261c6a5295a4274119f93e08079cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:57Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.198828 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.198876 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.198952 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.198978 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.199036 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:57Z","lastTransitionTime":"2026-01-30T21:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.200627 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:57Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.220566 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b14ccc4339f1670442e8d9c16039fb3607ed2097daf0e58291333a9d566dc02e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:57Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.243322 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:57Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.262006 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v9n4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ca98e-63b4-4337-af06-7525b62333b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c53920051c0c7658e9ffebac1382950ffea94ead520f2ca1b04cb6a0f9e15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk245\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v9n4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T21:43:57Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.279309 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-45w6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b980f4db-64d3-48c9-9ff8-18f23c4888cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf95q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf95q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-45w6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:57Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.301768 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:57Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.302391 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.302468 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.302488 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.302519 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.302538 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:57Z","lastTransitionTime":"2026-01-30T21:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.328362 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd656f9-7188-4998-8195-c2ec92442b7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeed7a751cd65072ffc31d4db0396578155d50518ad9dae28ced02985d1e4c62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jdbl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:57Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.346031 4869 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-c24fb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a9e9fe8-01db-40b0-bdd8-e3d626df037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c97e257984df620e4f118db44cb63e7748a6d62887c3904f1a2a0641991602d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p2pd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c24fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:57Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.368459 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e768b90cb0dae0e7c594e45deebf9178321fecc14339fa1e49769e13c950ff9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:57Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.385390 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6fc0664-5e80-440d-a6e8-4189cdf5c500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a73c5cac1fdcb8d81bccdb2eea7b34e75752439d7871493f4afdd0325f02b2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fa528823e4fa19f4de380fa7c5ab78f40c763a6ea6624ba20e1a65a81980d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vzgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:57Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.405375 4869 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.405430 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.405443 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.405462 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.405476 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:57Z","lastTransitionTime":"2026-01-30T21:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.409878 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tz8jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac3c503-e284-4df8-ae5e-0084a884e456\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6d47a8f0a09eac03369c1f21e44d8ec6aaf85d4d7ad180579ecd30a6311061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgb6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tz8jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:57Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.439571 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c39d4fe5-06cd-4ea4-8336-bd481332c475\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53fdfcb96702330a4f64446223b5dbdcf2e031b0
dc1ac2fbf9fac87ae5e51b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4f3e3e5f9a768662c4da62f383068742f43a32394db79e2a0302c99dbe53097\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"8 for removal\\\\nI0130 21:43:53.791005 6188 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 21:43:53.791025 6188 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 21:43:53.791033 6188 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 21:43:53.791055 6188 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0130 21:43:53.791062 6188 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0130 21:43:53.791085 6188 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0130 21:43:53.791091 6188 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 21:43:53.791104 6188 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0130 21:43:53.791085 6188 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 21:43:53.791120 6188 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 21:43:53.791149 6188 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 21:43:53.791210 6188 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 21:43:53.791247 6188 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 21:43:53.791316 6188 factory.go:656] Stopping watch factory\\\\nI0130 21:43:53.791354 6188 ovnkube.go:599] Stopped ovnkube\\\\nI0130 21:43:53.791402 6188 handler.go:208] Removed *v1.Node event handler 2\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53fdfcb96702330a4f64446223b5dbdcf2e031b0dc1ac2fbf9fac87ae5e51b60\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:43:56Z\\\",\\\"message\\\":\\\"rics-daemon-45w6p\\\\nI0130 21:43:56.316255 6395 ovn.go:94] Posting a Warning event for Pod openshift-multus/network-metrics-daemon-45w6p\\\\nI0130 21:43:56.316118 6395 services_controller.go:356] Processing sync for service openshift-cluster-samples-operator/metrics for network=default\\\\nI0130 21:43:56.316307 6395 services_controller.go:360] Finished syncing service metrics on namespace openshift-cluster-samples-operator for network=default : 188.546µs\\\\nI0130 21:43:56.316294 6395 factory.go:1336] Added *v1.Pod event handler 3\\\\nI0130 21:43:56.316355 6395 admin_network_policy_controller.go:133] Setting up event handlers for Admin Network Policy\\\\nI0130 21:43:56.316373 6395 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0130 21:43:56.316416 6395 factory.go:656] Stopping watch factory\\\\nI0130 21:43:56.316446 6395 ovnkube.go:599] Stopped ovnkube\\\\nI0130 21:43:56.316450 6395 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0130 21:43:56.316446 6395 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0130 21:43:56.316482 6395 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 21:43:56.316581 6395 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\
\":[{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-stqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:57Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.460470 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qnlzn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e92fcbf8-b6b4-4531-a7cf-ed59225dd821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43fd4613a6d7e2360584cfe98ccaa73d8ed65139ba3c249aeb716b4303d170eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c73341f991fc768d5dcc57229c33204b0ade5469b5b9d7a970f38b97c3d6696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qnlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:57Z is after 2025-08-24T17:21:41Z" Jan 30 
21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.485614 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a48f728-068e-4f6e-9794-12d375245df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb0a197870fc1ba15cbe9437452dfeac28e7670fe2383e71f5e29276082f7236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f22a7faa65595d272f6cb952ac0054da3d3b6a0017dc004f52c65d823f1da06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d705fd79e95205c59ebe341512286e7be25a663cdfe41ec50f0f57582c9f5ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 21:43:39.192174 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 21:43:39.192295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:43:39.192865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2603741252/tls.crt::/tmp/serving-cert-2603741252/tls.key\\\\\\\"\\\\nI0130 21:43:39.581448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:43:39.592260 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:43:39.592286 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:43:39.592313 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:43:39.592318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:43:39.598780 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:43:39.598801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:43:39.598815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:43:39.598819 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:43:39.598822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:43:39.598829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:43:39.601912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9563e1c219434c81376559138e931e8457d653060f639891e4a15b3a61bcf2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:57Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.509126 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.509211 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.509235 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.509270 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.509290 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:57Z","lastTransitionTime":"2026-01-30T21:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.612420 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.612490 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.612516 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.612549 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.612575 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:57Z","lastTransitionTime":"2026-01-30T21:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.716179 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.716283 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.716307 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.716359 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.716390 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:57Z","lastTransitionTime":"2026-01-30T21:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.819866 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.819960 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.819983 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.820009 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.820028 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:57Z","lastTransitionTime":"2026-01-30T21:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.876041 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.876101 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.876147 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:43:57 crc kubenswrapper[4869]: E0130 21:43:57.876233 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd" Jan 30 21:43:57 crc kubenswrapper[4869]: E0130 21:43:57.876393 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.876575 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:43:57 crc kubenswrapper[4869]: E0130 21:43:57.876893 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:43:57 crc kubenswrapper[4869]: E0130 21:43:57.877118 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.918694 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 04:54:48.377338593 +0000 UTC Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.923762 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.923839 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.923867 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.923892 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:57 crc kubenswrapper[4869]: I0130 21:43:57.923938 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:57Z","lastTransitionTime":"2026-01-30T21:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.028362 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.028430 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.028455 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.028494 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.028519 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:58Z","lastTransitionTime":"2026-01-30T21:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.131170 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.131220 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.131228 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.131256 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.131271 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:58Z","lastTransitionTime":"2026-01-30T21:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.146872 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-stqvf_c39d4fe5-06cd-4ea4-8336-bd481332c475/ovnkube-controller/1.log" Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.234578 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.234671 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.234698 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.234734 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.234759 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:58Z","lastTransitionTime":"2026-01-30T21:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.338861 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.338964 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.338978 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.338997 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.339011 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:58Z","lastTransitionTime":"2026-01-30T21:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.347709 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b980f4db-64d3-48c9-9ff8-18f23c4888cd-metrics-certs\") pod \"network-metrics-daemon-45w6p\" (UID: \"b980f4db-64d3-48c9-9ff8-18f23c4888cd\") " pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:43:58 crc kubenswrapper[4869]: E0130 21:43:58.347969 4869 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:43:58 crc kubenswrapper[4869]: E0130 21:43:58.348088 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b980f4db-64d3-48c9-9ff8-18f23c4888cd-metrics-certs podName:b980f4db-64d3-48c9-9ff8-18f23c4888cd nodeName:}" failed. No retries permitted until 2026-01-30 21:44:02.348055579 +0000 UTC m=+43.233813654 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b980f4db-64d3-48c9-9ff8-18f23c4888cd-metrics-certs") pod "network-metrics-daemon-45w6p" (UID: "b980f4db-64d3-48c9-9ff8-18f23c4888cd") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.441695 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.441765 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.441783 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.441814 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.441836 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:58Z","lastTransitionTime":"2026-01-30T21:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.545372 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.545424 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.545438 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.545462 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.545477 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:58Z","lastTransitionTime":"2026-01-30T21:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.648827 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.648931 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.648959 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.648991 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.649014 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:58Z","lastTransitionTime":"2026-01-30T21:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.752803 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.752886 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.752931 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.752955 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.752974 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:58Z","lastTransitionTime":"2026-01-30T21:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.856441 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.856521 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.856544 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.856584 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.856606 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:58Z","lastTransitionTime":"2026-01-30T21:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.876946 4869 scope.go:117] "RemoveContainer" containerID="75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3" Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.919034 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 12:16:50.977470076 +0000 UTC Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.959647 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.959692 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.959703 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.959720 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:58 crc kubenswrapper[4869]: I0130 21:43:58.959733 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:58Z","lastTransitionTime":"2026-01-30T21:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.063176 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.063505 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.063518 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.063534 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.063557 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:59Z","lastTransitionTime":"2026-01-30T21:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.160970 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.162281 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d12f1427fc164e0f92a0ce90e116d3239ad1c56376666cea3ef92178d0988076"} Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.162603 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.165906 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.165942 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.165954 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.165970 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.165982 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:59Z","lastTransitionTime":"2026-01-30T21:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.182337 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.200092 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd656f9-7188-4998-8195-c2ec92442b7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeed7a751cd65072ffc31d4db0396578155d50518ad9dae28ced02985d1e4c62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jdbl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.213325 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c24fb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a9e9fe8-01db-40b0-bdd8-e3d626df037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c97e257984df620e4f118db44cb63e7748a6d62887c3904f1a2a0641991602d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p2pd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c24fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-01-30T21:43:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.228108 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-45w6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b980f4db-64d3-48c9-9ff8-18f23c4888cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf95q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf95q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-45w6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.245839 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6fc0664-5e80-440d-a6e8-4189cdf5c500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a73c5cac1fdcb8d81bccdb2eea7b34e75752439d7871493f4afdd0325f02b2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fa528823e4fa19f4de380fa7c5ab78f40c763a6ea6624ba20e1a65a81980d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vzgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.263420 4869 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-multus/multus-tz8jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac3c503-e284-4df8-ae5e-0084a884e456\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6d47a8f0a09eac03369c1f21e44d8ec6aaf85d4d7ad180579ecd30a6311061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgb6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-tz8jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.268553 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.268605 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.268627 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.268649 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.268669 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:59Z","lastTransitionTime":"2026-01-30T21:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.289023 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c39d4fe5-06cd-4ea4-8336-bd481332c475\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53fdfcb96702330a4f64446223b5dbdcf2e031b0
dc1ac2fbf9fac87ae5e51b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4f3e3e5f9a768662c4da62f383068742f43a32394db79e2a0302c99dbe53097\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"8 for removal\\\\nI0130 21:43:53.791005 6188 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 21:43:53.791025 6188 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 21:43:53.791033 6188 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 21:43:53.791055 6188 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0130 21:43:53.791062 6188 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0130 21:43:53.791085 6188 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0130 21:43:53.791091 6188 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 21:43:53.791104 6188 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0130 21:43:53.791085 6188 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 21:43:53.791120 6188 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 21:43:53.791149 6188 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 21:43:53.791210 6188 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 21:43:53.791247 6188 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 21:43:53.791316 6188 factory.go:656] Stopping watch factory\\\\nI0130 21:43:53.791354 6188 ovnkube.go:599] Stopped ovnkube\\\\nI0130 21:43:53.791402 6188 handler.go:208] Removed *v1.Node event handler 2\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53fdfcb96702330a4f64446223b5dbdcf2e031b0dc1ac2fbf9fac87ae5e51b60\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:43:56Z\\\",\\\"message\\\":\\\"rics-daemon-45w6p\\\\nI0130 21:43:56.316255 6395 ovn.go:94] Posting a Warning event for Pod openshift-multus/network-metrics-daemon-45w6p\\\\nI0130 21:43:56.316118 6395 services_controller.go:356] Processing sync for service openshift-cluster-samples-operator/metrics for network=default\\\\nI0130 21:43:56.316307 6395 services_controller.go:360] Finished syncing service metrics on namespace openshift-cluster-samples-operator for network=default : 188.546µs\\\\nI0130 21:43:56.316294 6395 factory.go:1336] Added *v1.Pod event handler 3\\\\nI0130 21:43:56.316355 6395 admin_network_policy_controller.go:133] Setting up event handlers for Admin Network Policy\\\\nI0130 21:43:56.316373 6395 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0130 21:43:56.316416 6395 factory.go:656] Stopping watch factory\\\\nI0130 21:43:56.316446 6395 ovnkube.go:599] Stopped ovnkube\\\\nI0130 21:43:56.316450 6395 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0130 21:43:56.316446 6395 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0130 21:43:56.316482 6395 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 21:43:56.316581 6395 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\
\":[{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-stqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.309255 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qnlzn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e92fcbf8-b6b4-4531-a7cf-ed59225dd821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43fd4613a6d7e2360584cfe98ccaa73d8ed65139ba3c249aeb716b4303d170eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c73341f991fc768d5dcc57229c33204b0ade5469b5b9d7a970f38b97c3d6696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qnlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:59Z is after 2025-08-24T17:21:41Z" Jan 30 
21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.333938 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a48f728-068e-4f6e-9794-12d375245df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb0a197870fc1ba15cbe9437452dfeac28e7670fe2383e71f5e29276082f7236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f22a7faa65595d272f6cb952ac0054da3d3b6a0017dc004f52c65d823f1da06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d705fd79e95205c59ebe341512286e7be25a663cdfe41ec50f0f57582c9f5ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d12f1427fc164e0f92a0ce90e116d3239ad1c56376666cea3ef92178d0988076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 21:43:39.192174 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 21:43:39.192295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:43:39.192865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2603741252/tls.crt::/tmp/serving-cert-2603741252/tls.key\\\\\\\"\\\\nI0130 21:43:39.581448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:43:39.592260 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:43:39.592286 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:43:39.592313 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:43:39.592318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:43:39.598780 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:43:39.598801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:43:39.598815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:43:39.598819 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:43:39.598822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:43:39.598829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:43:39.601912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9563e1c219434c81376559138e931e8457d653060f639891e4a15b3a61bcf2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.352150 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e768b90cb0dae0e7c594e45deebf9178321fecc14339fa1e49769e13c950ff9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.367951 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13da86d5-c968-4e20-88dd-86b52c6402a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5f07c14a94021761f655ab66d0782a878dacdacbe9abc2c02db132e5f5721a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0c8909f295c8aea1aa2d6da1a709d1692b8fbdd6bb389d339d644655788328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://267daf399d26dd0253eed95c94b44bd4299990a14bb28159ddfd1f324dec6192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fd36ca3590a916ec08c8ae4d3b410eb11261c6a5295a4274119f93e08079cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.370663 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.370815 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.370932 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.371072 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.371219 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:59Z","lastTransitionTime":"2026-01-30T21:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.387435 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.400450 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b14ccc4339f1670442e8d9c16039fb3607ed2097daf0e58291333a9d566dc02e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.420213 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a98ef3151784bd76a916ca4dc6d85c25c10bf0d9eca4444780ceb2c826e439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7741a57e2e55c2f88115cf30643e5c4cd87ab58081677cccdd027bf1307a7a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.438019 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.453217 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v9n4p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ca98e-63b4-4337-af06-7525b62333b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c53920051c0c7658e9ffebac1382950ffea94ead520f2ca1b04cb6a0f9e15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk245\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v9n4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.474387 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.474420 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.474429 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.474444 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.474454 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:59Z","lastTransitionTime":"2026-01-30T21:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.581137 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.581552 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.581578 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.581596 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.581726 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:59Z","lastTransitionTime":"2026-01-30T21:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.684918 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.684950 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.684958 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.684973 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.684983 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:59Z","lastTransitionTime":"2026-01-30T21:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.787371 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.787422 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.787433 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.787463 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.787476 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:59Z","lastTransitionTime":"2026-01-30T21:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.876646 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.876647 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.877279 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.877438 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:43:59 crc kubenswrapper[4869]: E0130 21:43:59.877430 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:43:59 crc kubenswrapper[4869]: E0130 21:43:59.877660 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd" Jan 30 21:43:59 crc kubenswrapper[4869]: E0130 21:43:59.877972 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:43:59 crc kubenswrapper[4869]: E0130 21:43:59.877876 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.889979 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.890047 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.890067 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.890094 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.890114 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:59Z","lastTransitionTime":"2026-01-30T21:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.892990 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.910604 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd656f9-7188-4998-8195-c2ec92442b7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeed7a751cd65072ffc31d4db0396578155d50518ad9dae28ced02985d1e4c62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jdbl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.919776 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 06:51:48.753891403 +0000 UTC Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.921056 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c24fb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a9e9fe8-01db-40b0-bdd8-e3d626df037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c97e257984df620e4f118db44cb63e7748a6d62887c3904f1a2a0641991602d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p2pd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c24fb\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.932716 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-45w6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b980f4db-64d3-48c9-9ff8-18f23c4888cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf95q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf95q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-45w6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:59Z is 
after 2025-08-24T17:21:41Z" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.944609 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tz8jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac3c503-e284-4df8-ae5e-0084a884e456\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6d47a8f0a09eac03369c1f21e44d8ec6aaf85d4d7ad180579ecd30a6311061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgb6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\
\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tz8jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.964881 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c39d4fe5-06cd-4ea4-8336-bd481332c475\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53fdfcb96702330a4f64446223b5dbdcf2e031b0dc1ac2fbf9fac87ae5e51b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4f3e3e5f9a768662c4da62f383068742f43a32394db79e2a0302c99dbe53097\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"8 for removal\\\\nI0130 21:43:53.791005 6188 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 21:43:53.791025 6188 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 21:43:53.791033 6188 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 21:43:53.791055 6188 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0130 21:43:53.791062 6188 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0130 21:43:53.791085 6188 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0130 21:43:53.791091 6188 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 21:43:53.791104 6188 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0130 21:43:53.791085 6188 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 21:43:53.791120 6188 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 21:43:53.791149 6188 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 21:43:53.791210 6188 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 21:43:53.791247 6188 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 21:43:53.791316 6188 factory.go:656] Stopping watch factory\\\\nI0130 21:43:53.791354 6188 ovnkube.go:599] Stopped 
ovnkube\\\\nI0130 21:43:53.791402 6188 handler.go:208] Removed *v1.Node event handler 2\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53fdfcb96702330a4f64446223b5dbdcf2e031b0dc1ac2fbf9fac87ae5e51b60\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:43:56Z\\\",\\\"message\\\":\\\"rics-daemon-45w6p\\\\nI0130 21:43:56.316255 6395 ovn.go:94] Posting a Warning event for Pod openshift-multus/network-metrics-daemon-45w6p\\\\nI0130 21:43:56.316118 6395 services_controller.go:356] Processing sync for service openshift-cluster-samples-operator/metrics for network=default\\\\nI0130 21:43:56.316307 6395 services_controller.go:360] Finished syncing service metrics on namespace openshift-cluster-samples-operator for network=default : 188.546µs\\\\nI0130 21:43:56.316294 6395 factory.go:1336] Added *v1.Pod event handler 3\\\\nI0130 21:43:56.316355 6395 admin_network_policy_controller.go:133] Setting up event handlers for Admin Network Policy\\\\nI0130 21:43:56.316373 6395 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0130 21:43:56.316416 6395 factory.go:656] Stopping watch factory\\\\nI0130 21:43:56.316446 6395 ovnkube.go:599] Stopped ovnkube\\\\nI0130 21:43:56.316450 6395 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0130 21:43:56.316446 6395 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0130 21:43:56.316482 6395 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 21:43:56.316581 6395 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-stqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.975906 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qnlzn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e92fcbf8-b6b4-4531-a7cf-ed59225dd821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43fd4613a6d7e2360584cfe98ccaa73d8ed65139ba3c249aeb716b4303d170eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tsq2\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c73341f991fc768d5dcc57229c33204b0ade5469b5b9d7a970f38b97c3d6696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qnlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.993062 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a48f728-068e-4f6e-9794-12d375245df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb0a197870fc1ba15cbe9437452dfeac28e7670fe2383e71f5e29276082f7236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f22a7faa65595d272f6cb952ac0054da3d3b6a0017dc004f52c65d823f1da06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d705fd79e95205c59ebe341512286e7be25a663cdfe41ec50f0f57582c9f5ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d12f1427fc164e0f92a0ce90e116d3239ad1c56376666cea3ef92178d0988076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 21:43:39.192174 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 21:43:39.192295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:43:39.192865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2603741252/tls.crt::/tmp/serving-cert-2603741252/tls.key\\\\\\\"\\\\nI0130 21:43:39.581448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:43:39.592260 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:43:39.592286 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:43:39.592313 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:43:39.592318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:43:39.598780 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:43:39.598801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:43:39.598815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:43:39.598819 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:43:39.598822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:43:39.598829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:43:39.601912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9563e1c219434c81376559138e931e8457d653060f639891e4a15b3a61bcf2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:43:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.993740 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.993836 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.993913 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.993990 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:43:59 crc kubenswrapper[4869]: I0130 21:43:59.994059 4869 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:43:59Z","lastTransitionTime":"2026-01-30T21:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.004627 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e768b90cb0dae0e7c594e45deebf9178321fecc14339fa1e49769e13c950ff9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.016134 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6fc0664-5e80-440d-a6e8-4189cdf5c500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a73c5cac1fdcb8d81bccdb2eea7b34e75752439d7871493f4afdd0325f02b2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fa528823e4fa19f4de380fa7c5ab78f40c763a6ea6624ba20e1a65a81980d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vzgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.030091 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13da86d5-c968-4e20-88dd-86b52c6402a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5f07c14a94021761f655ab66d0782a878dacdacbe9abc2c02db132e5f5721a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0c8909f295c8aea1aa2d6da1a709d1692b8fbdd6bb389d339d644655788328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://267daf399d26dd0253eed95c94b44bd4299990a14bb28159ddfd1f324dec6192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fd36ca3590a916ec08c8ae4d3
b410eb11261c6a5295a4274119f93e08079cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.052283 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.072190 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b14ccc4339f1670442e8d9c16039fb3607ed2097daf0e58291333a9d566dc02e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.094301 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a98ef3151784bd76a916ca4dc6d85c25c10bf0d9eca4444780ceb2c826e439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7741a57e2e55c2f88115cf30643e5c4cd87ab58081677cccdd027bf1307a7a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.097872 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.097993 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.098018 4869 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.098050 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.098081 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:00Z","lastTransitionTime":"2026-01-30T21:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.120664 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.139245 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v9n4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ca98e-63b4-4337-af06-7525b62333b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c53920051c0c7658e9ffebac1382950ffea94ead520f2ca1b04cb6a0f9e15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk245\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v9n4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T21:44:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.200851 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.200942 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.200956 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.200982 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.201002 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:00Z","lastTransitionTime":"2026-01-30T21:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.304214 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.304262 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.304279 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.304304 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.304320 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:00Z","lastTransitionTime":"2026-01-30T21:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.407769 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.408715 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.408827 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.408948 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.409032 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:00Z","lastTransitionTime":"2026-01-30T21:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.513721 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.513807 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.513835 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.513881 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.513954 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:00Z","lastTransitionTime":"2026-01-30T21:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.618468 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.618581 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.618602 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.618633 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.618656 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:00Z","lastTransitionTime":"2026-01-30T21:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.722733 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.722790 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.722805 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.722824 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.722836 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:00Z","lastTransitionTime":"2026-01-30T21:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.826884 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.826959 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.826972 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.826993 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.827008 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:00Z","lastTransitionTime":"2026-01-30T21:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.920177 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 22:23:44.343420842 +0000 UTC Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.930978 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.931044 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.931065 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.931100 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:00 crc kubenswrapper[4869]: I0130 21:44:00.931124 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:00Z","lastTransitionTime":"2026-01-30T21:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.034538 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.034611 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.034634 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.034665 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.034687 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:01Z","lastTransitionTime":"2026-01-30T21:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.137930 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.138009 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.138030 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.138056 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.138074 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:01Z","lastTransitionTime":"2026-01-30T21:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.242128 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.242220 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.242246 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.242282 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.242309 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:01Z","lastTransitionTime":"2026-01-30T21:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.345144 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.345189 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.345200 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.345215 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.345225 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:01Z","lastTransitionTime":"2026-01-30T21:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.448666 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.448724 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.448733 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.448749 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.448759 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:01Z","lastTransitionTime":"2026-01-30T21:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.551801 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.551845 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.551854 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.551867 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.551876 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:01Z","lastTransitionTime":"2026-01-30T21:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.655110 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.655170 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.655199 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.655224 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.655239 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:01Z","lastTransitionTime":"2026-01-30T21:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.757696 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.757762 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.757786 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.757810 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.757828 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:01Z","lastTransitionTime":"2026-01-30T21:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.860956 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.861022 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.861035 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.861055 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.861074 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:01Z","lastTransitionTime":"2026-01-30T21:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.876471 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.876577 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:44:01 crc kubenswrapper[4869]: E0130 21:44:01.876612 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.876587 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:44:01 crc kubenswrapper[4869]: E0130 21:44:01.876808 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd"
Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.876862 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:44:01 crc kubenswrapper[4869]: E0130 21:44:01.877180 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:44:01 crc kubenswrapper[4869]: E0130 21:44:01.877305 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.921163 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 07:12:11.665304065 +0000 UTC
Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.964566 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.964634 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.964659 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.964693 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:01 crc kubenswrapper[4869]: I0130 21:44:01.964722 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:01Z","lastTransitionTime":"2026-01-30T21:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:02 crc kubenswrapper[4869]: I0130 21:44:02.068192 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:02 crc kubenswrapper[4869]: I0130 21:44:02.068301 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:02 crc kubenswrapper[4869]: I0130 21:44:02.068327 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:02 crc kubenswrapper[4869]: I0130 21:44:02.068355 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:02 crc kubenswrapper[4869]: I0130 21:44:02.068373 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:02Z","lastTransitionTime":"2026-01-30T21:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:02 crc kubenswrapper[4869]: I0130 21:44:02.171419 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:02 crc kubenswrapper[4869]: I0130 21:44:02.171488 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:02 crc kubenswrapper[4869]: I0130 21:44:02.171549 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:02 crc kubenswrapper[4869]: I0130 21:44:02.171576 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:02 crc kubenswrapper[4869]: I0130 21:44:02.171597 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:02Z","lastTransitionTime":"2026-01-30T21:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:02 crc kubenswrapper[4869]: I0130 21:44:02.274792 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:02 crc kubenswrapper[4869]: I0130 21:44:02.274864 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:02 crc kubenswrapper[4869]: I0130 21:44:02.274884 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:02 crc kubenswrapper[4869]: I0130 21:44:02.274945 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:02 crc kubenswrapper[4869]: I0130 21:44:02.274965 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:02Z","lastTransitionTime":"2026-01-30T21:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:02 crc kubenswrapper[4869]: I0130 21:44:02.379014 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:02 crc kubenswrapper[4869]: I0130 21:44:02.379093 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:02 crc kubenswrapper[4869]: I0130 21:44:02.379114 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:02 crc kubenswrapper[4869]: I0130 21:44:02.379142 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:02 crc kubenswrapper[4869]: I0130 21:44:02.379163 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:02Z","lastTransitionTime":"2026-01-30T21:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:02 crc kubenswrapper[4869]: I0130 21:44:02.401497 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b980f4db-64d3-48c9-9ff8-18f23c4888cd-metrics-certs\") pod \"network-metrics-daemon-45w6p\" (UID: \"b980f4db-64d3-48c9-9ff8-18f23c4888cd\") " pod="openshift-multus/network-metrics-daemon-45w6p"
Jan 30 21:44:02 crc kubenswrapper[4869]: E0130 21:44:02.401700 4869 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 30 21:44:02 crc kubenswrapper[4869]: E0130 21:44:02.401798 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b980f4db-64d3-48c9-9ff8-18f23c4888cd-metrics-certs podName:b980f4db-64d3-48c9-9ff8-18f23c4888cd nodeName:}" failed. No retries permitted until 2026-01-30 21:44:10.401763931 +0000 UTC m=+51.287521996 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b980f4db-64d3-48c9-9ff8-18f23c4888cd-metrics-certs") pod "network-metrics-daemon-45w6p" (UID: "b980f4db-64d3-48c9-9ff8-18f23c4888cd") : object "openshift-multus"/"metrics-daemon-secret" not registered
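[Editor's note, not part of the log: the "durationBeforeRetry 8s" above reflects the kubelet's exponential backoff for repeated volume operations — each consecutive MountVolume failure roughly doubles the wait before the next retry, up to a cap. A minimal sketch of that doubling policy in Python follows; the initial delay, factor, and cap are illustrative assumptions, not values read from this kubelet.]

import time

def backoff_delays(initial=0.5, factor=2.0, cap=128.0):
    # Yield retry delays that double after each failure, up to a cap:
    # 0.5s, 1s, 2s, 4s, 8s, ... — the 8s seen above would arise after
    # several consecutive failures. (initial/factor/cap are assumed.)
    delay = initial
    while True:
        yield min(delay, cap)
        delay = delay * factor

def retry(op, max_attempts=6):
    # Run op() until it succeeds, sleeping a doubling delay between failures,
    # analogous to "No retries permitted until <now + durationBeforeRetry>".
    delays = backoff_delays()
    for attempt in range(1, max_attempts + 1):
        try:
            return op()
        except Exception as err:
            wait = next(delays)
            print(f"attempt {attempt} failed ({err}); retrying in {wait}s")
            time.sleep(wait)
    raise RuntimeError("giving up after max_attempts")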
Jan 30 21:44:02 crc kubenswrapper[4869]: I0130 21:44:02.483029 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:02 crc kubenswrapper[4869]: I0130 21:44:02.483133 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:02 crc kubenswrapper[4869]: I0130 21:44:02.483165 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:02 crc kubenswrapper[4869]: I0130 21:44:02.483203 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:02 crc kubenswrapper[4869]: I0130 21:44:02.483244 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:02Z","lastTransitionTime":"2026-01-30T21:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:02 crc kubenswrapper[4869]: I0130 21:44:02.587555 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:02 crc kubenswrapper[4869]: I0130 21:44:02.587637 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:02 crc kubenswrapper[4869]: I0130 21:44:02.587659 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:02 crc kubenswrapper[4869]: I0130 21:44:02.587687 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:02 crc kubenswrapper[4869]: I0130 21:44:02.587708 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:02Z","lastTransitionTime":"2026-01-30T21:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:02 crc kubenswrapper[4869]: I0130 21:44:02.691523 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:02 crc kubenswrapper[4869]: I0130 21:44:02.691594 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:02 crc kubenswrapper[4869]: I0130 21:44:02.691606 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:02 crc kubenswrapper[4869]: I0130 21:44:02.691626 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:02 crc kubenswrapper[4869]: I0130 21:44:02.691638 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:02Z","lastTransitionTime":"2026-01-30T21:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:02 crc kubenswrapper[4869]: I0130 21:44:02.795074 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:02 crc kubenswrapper[4869]: I0130 21:44:02.795151 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:02 crc kubenswrapper[4869]: I0130 21:44:02.795174 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:02 crc kubenswrapper[4869]: I0130 21:44:02.795203 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:02 crc kubenswrapper[4869]: I0130 21:44:02.795223 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:02Z","lastTransitionTime":"2026-01-30T21:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:02 crc kubenswrapper[4869]: I0130 21:44:02.900246 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:02 crc kubenswrapper[4869]: I0130 21:44:02.900335 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:02 crc kubenswrapper[4869]: I0130 21:44:02.900357 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:02 crc kubenswrapper[4869]: I0130 21:44:02.900388 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:02 crc kubenswrapper[4869]: I0130 21:44:02.900408 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:02Z","lastTransitionTime":"2026-01-30T21:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:02 crc kubenswrapper[4869]: I0130 21:44:02.922001 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 04:43:22.703275899 +0000 UTC
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.004926 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.005004 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.005028 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.005062 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.005085 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:03Z","lastTransitionTime":"2026-01-30T21:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.109331 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.109433 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.109504 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.109543 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.109574 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:03Z","lastTransitionTime":"2026-01-30T21:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.213313 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.213802 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.214112 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.214326 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.214497 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:03Z","lastTransitionTime":"2026-01-30T21:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.317551 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.317694 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.317714 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.317740 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.317760 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:03Z","lastTransitionTime":"2026-01-30T21:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.421361 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.421757 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.421968 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.422137 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.422262 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:03Z","lastTransitionTime":"2026-01-30T21:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.526207 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.527439 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.527633 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.527827 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.528057 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:03Z","lastTransitionTime":"2026-01-30T21:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.636098 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.636639 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.636817 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.637059 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.637231 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:03Z","lastTransitionTime":"2026-01-30T21:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.740946 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.741018 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.741036 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.741065 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.741084 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:03Z","lastTransitionTime":"2026-01-30T21:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.844868 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.844956 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.844971 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.845002 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.845017 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:03Z","lastTransitionTime":"2026-01-30T21:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.876533 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.876533 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.876668 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:44:03 crc kubenswrapper[4869]: E0130 21:44:03.876931 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.877003 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p"
Jan 30 21:44:03 crc kubenswrapper[4869]: E0130 21:44:03.877310 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:44:03 crc kubenswrapper[4869]: E0130 21:44:03.877561 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd"
Jan 30 21:44:03 crc kubenswrapper[4869]: E0130 21:44:03.877739 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.922638 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 06:35:06.797746942 +0000 UTC
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.948450 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.948515 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.948534 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.948575 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:03 crc kubenswrapper[4869]: I0130 21:44:03.948597 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:03Z","lastTransitionTime":"2026-01-30T21:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.051756 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.052044 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.052116 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.052183 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.052247 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:04Z","lastTransitionTime":"2026-01-30T21:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.155692 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.155998 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.156070 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.156135 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.156189 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:04Z","lastTransitionTime":"2026-01-30T21:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.259108 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.259194 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.259220 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.259252 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.259274 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:04Z","lastTransitionTime":"2026-01-30T21:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.362973 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.363231 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.363292 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.363355 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.363439 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:04Z","lastTransitionTime":"2026-01-30T21:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.466965 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.467046 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.467068 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.467097 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.467123 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:04Z","lastTransitionTime":"2026-01-30T21:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.569802 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.570356 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.570457 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.570574 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.570635 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:04Z","lastTransitionTime":"2026-01-30T21:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.673412 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.673570 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.673592 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.673619 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.673638 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:04Z","lastTransitionTime":"2026-01-30T21:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.777228 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.777304 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.777325 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.777353 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.777373 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:04Z","lastTransitionTime":"2026-01-30T21:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.880773 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.881242 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.881431 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.881626 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.881813 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:04Z","lastTransitionTime":"2026-01-30T21:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.923091 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 12:40:19.229641487 +0000 UTC
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.985029 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.985093 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.985112 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.985136 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:04 crc kubenswrapper[4869]: I0130 21:44:04.985149 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:04Z","lastTransitionTime":"2026-01-30T21:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.086240 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf"
Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.088023 4869 scope.go:117] "RemoveContainer" containerID="53fdfcb96702330a4f64446223b5dbdcf2e031b0dc1ac2fbf9fac87ae5e51b60"
Jan 30 21:44:05 crc kubenswrapper[4869]: E0130 21:44:05.088407 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-stqvf_openshift-ovn-kubernetes(c39d4fe5-06cd-4ea4-8336-bd481332c475)\"" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475"
Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.089065 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.089138 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.089168 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.089198 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.089220 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:05Z","lastTransitionTime":"2026-01-30T21:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
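[Editor's note, not part of the log: nearly everything in this capture reduces to a few repeating failure signatures — the missing CNI config, the unregistered openshift-multus/metrics-daemon-secret, the ovnkube-controller CrashLoopBackOff, and (in the entries that follow) an expired webhook serving certificate. A small triage sketch in Python, under the assumption that the dump is fed on stdin, can tally those signatures so the per-second NodeNotReady heartbeats do not bury the root causes; the regexes simply match message text seen in this log.]

import re
import sys
from collections import Counter

# Message shapes observed in this capture; adjust patterns for other dumps.
SIGNATURES = {
    "cni_not_ready": re.compile(r"no CNI configuration file in /etc/kubernetes/cni/net\.d/"),
    "secret_not_registered": re.compile(r'"metrics-daemon-secret" not registered'),
    "webhook_cert_expired": re.compile(r"x509: certificate has expired or is not yet valid"),
    "crashloop_backoff": re.compile(r"CrashLoopBackOff"),
}

def triage(lines):
    # Count how often each known failure signature appears in the stream.
    counts = Counter()
    for line in lines:
        for name, pattern in SIGNATURES.items():
            if pattern.search(line):
                counts[name] += 1
    return counts

if __name__ == "__main__":
    # Usage (assumed invocation): journalctl -u kubelet | python3 triage.py
    for name, n in triage(sys.stdin).most_common():
        print(f"{name}: {n}")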
Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.109752 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c24fb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a9e9fe8-01db-40b0-bdd8-e3d626df037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c97e257984df620e4f118db44cb63e7748a6d62887c3904f1a2a0641991602d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p2pd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c24fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:05Z is after 2025-08-24T17:21:41Z"
Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.128486 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-45w6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b980f4db-64d3-48c9-9ff8-18f23c4888cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf95q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf95q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-45w6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:05Z is after 2025-08-24T17:21:41Z"
Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.149454 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:05Z is after 2025-08-24T17:21:41Z"
Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.171592 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd656f9-7188-4998-8195-c2ec92442b7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeed7a751cd65072ffc31d4db0396578155d50518ad9dae28ced02985d1e4c62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jdbl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:05Z is after 2025-08-24T17:21:41Z"
Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.189077 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e768b90cb0dae0e7c594e45deebf9178321fecc14339fa1e49769e13c950ff9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:05Z is after 2025-08-24T17:21:41Z"
Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.191603 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.191640 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.191652 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.191671 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.191685 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:05Z","lastTransitionTime":"2026-01-30T21:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.204811 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6fc0664-5e80-440d-a6e8-4189cdf5c500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a73c5cac1fdcb8d81bccdb2eea7b34e75752439d7871493f4afdd0325f02b2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fa528823e4fa19f4de380fa7c5ab78f40c763a6ea6624ba20e1a65a81980d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vzgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:05Z is after 2025-08-24T17:21:41Z"
Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.221544 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tz8jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac3c503-e284-4df8-ae5e-0084a884e456\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6d47a8f0a09eac03369c1f21e44d8ec6aaf85d4d7ad180579ecd30a6311061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgb6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tz8jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.245712 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c39d4fe5-06cd-4ea4-8336-bd481332c475\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53fdfcb96702330a4f64446223b5dbdcf2e031b0
dc1ac2fbf9fac87ae5e51b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53fdfcb96702330a4f64446223b5dbdcf2e031b0dc1ac2fbf9fac87ae5e51b60\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:43:56Z\\\",\\\"message\\\":\\\"rics-daemon-45w6p\\\\nI0130 21:43:56.316255 6395 ovn.go:94] Posting a Warning event for Pod openshift-multus/network-metrics-daemon-45w6p\\\\nI0130 21:43:56.316118 6395 services_controller.go:356] Processing sync for service openshift-cluster-samples-operator/metrics for network=default\\\\nI0130 21:43:56.316307 6395 services_controller.go:360] Finished syncing service metrics on namespace openshift-cluster-samples-operator for network=default : 188.546µs\\\\nI0130 21:43:56.316294 6395 factory.go:1336] Added *v1.Pod event handler 3\\\\nI0130 21:43:56.316355 6395 admin_network_policy_controller.go:133] Setting up event handlers for Admin Network Policy\\\\nI0130 21:43:56.316373 6395 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0130 21:43:56.316416 6395 factory.go:656] Stopping watch factory\\\\nI0130 21:43:56.316446 6395 ovnkube.go:599] Stopped ovnkube\\\\nI0130 21:43:56.316450 6395 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0130 21:43:56.316446 6395 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0130 21:43:56.316482 6395 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 21:43:56.316581 6395 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-stqvf_openshift-ovn-kubernetes(c39d4fe5-06cd-4ea4-8336-bd481332c475)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-stqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.261593 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qnlzn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e92fcbf8-b6b4-4531-a7cf-ed59225dd821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43fd4613a6d7e2360584cfe98ccaa73d8ed65139ba3c249aeb716b4303d170eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tsq2
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c73341f991fc768d5dcc57229c33204b0ade5469b5b9d7a970f38b97c3d6696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qnlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.276604 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a48f728-068e-4f6e-9794-12d375245df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb0a197870fc1ba15cbe9437452dfeac28e7670fe2383e71f5e29276082f7236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f22a7faa65595d272f6cb952ac0054da3d3b6a0017dc004f52c65d823f1da06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d705fd79e95205c59ebe341512286e7be25a663cdfe41ec50f0f57582c9f5ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d12f1427fc164e0f92a0ce90e116d3239ad1c56376666cea3ef92178d0988076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 21:43:39.192174 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 21:43:39.192295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:43:39.192865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2603741252/tls.crt::/tmp/serving-cert-2603741252/tls.key\\\\\\\"\\\\nI0130 21:43:39.581448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:43:39.592260 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:43:39.592286 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:43:39.592313 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:43:39.592318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:43:39.598780 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:43:39.598801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:43:39.598815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:43:39.598819 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:43:39.598822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:43:39.598829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:43:39.601912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9563e1c219434c81376559138e931e8457d653060f639891e4a15b3a61bcf2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.292421 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b14ccc4339f1670442e8d9c16039fb3607ed2097daf0e58291333a9d566dc02e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.293973 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.294074 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.294141 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.294207 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.294376 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:05Z","lastTransitionTime":"2026-01-30T21:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.304777 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a98ef3151784bd76a916ca4dc6d85c25c10bf0d9eca4444780ceb2c826e439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7741a57e2e55c2f88115cf30643e5c4cd87ab58081677cccdd027bf1307a7a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.321382 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13da86d5-c968-4e20-88dd-86b52c6402a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5f07c14a94021761f655ab66d0782a878dacdacbe9abc2c02db132e5f5721a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0c8909f295c8aea1aa2d6da1a709d1692b8fbdd6bb389d339d644655788328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://267daf399d26dd0253eed95c94b44bd4299990a14bb28159ddfd1f324dec6192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fd36ca3590a916ec08c8ae4d3b410eb11261c6a5295a4274119f93e08079cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.340427 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.352358 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v9n4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ca98e-63b4-4337-af06-7525b62333b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c53920051c0c7658e9ffebac1382950ffea94ead520f2ca1b04cb6a0f9e15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk245\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v9n4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T21:44:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.367225 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.397263 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.397350 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.397370 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.397400 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.397420 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:05Z","lastTransitionTime":"2026-01-30T21:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.500411 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.500715 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.500801 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.500933 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.501034 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:05Z","lastTransitionTime":"2026-01-30T21:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.604193 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.604241 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.604257 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.604349 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.604364 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:05Z","lastTransitionTime":"2026-01-30T21:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
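
While the CNI configuration is absent, the kubelet re-records the same four node events and the same Ready=False condition every ~100ms (the .500*, .604*, .707*, .810*, and .913* entries around here). The condition payload that setters.go prints after "condition=" is plain JSON; the following minimal Go sketch decodes one of these payloads using only the fields visible in the log. The struct is an illustrative subset for reading the log, not the kubelet's own type.

// Minimal sketch: decode the condition payload logged by setters.go above.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// NodeCondition mirrors only the fields present in the logged payload
// (a subset of the Kubernetes core/v1 NodeCondition type).
type NodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Payload copied verbatim from the "Node became not ready" entries above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:05Z","lastTransitionTime":"2026-01-30T21:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
	var c NodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s=%s reason=%s\n", c.Type, c.Status, c.Reason)
}

Run against any of the payloads above, this prints Ready=False reason=KubeletNotReady, the condition the node keeps reporting until a CNI config appears in /etc/kubernetes/cni/net.d/.
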
Has your network provider started?"} Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.706864 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.707167 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.707269 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.707365 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.707487 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:05Z","lastTransitionTime":"2026-01-30T21:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.810199 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.810284 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.810311 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.810349 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.810375 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:05Z","lastTransitionTime":"2026-01-30T21:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.876115 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.876215 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.876158 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:44:05 crc kubenswrapper[4869]: E0130 21:44:05.883350 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd" Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.883471 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:44:05 crc kubenswrapper[4869]: E0130 21:44:05.884009 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:44:05 crc kubenswrapper[4869]: E0130 21:44:05.884513 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:44:05 crc kubenswrapper[4869]: E0130 21:44:05.884685 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.913159 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.913205 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.913220 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.913236 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.913247 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:05Z","lastTransitionTime":"2026-01-30T21:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:05 crc kubenswrapper[4869]: I0130 21:44:05.923729 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 00:12:17.879056593 +0000 UTC Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.016810 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.016858 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.016872 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.016892 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.016927 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:06Z","lastTransitionTime":"2026-01-30T21:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.119717 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.119822 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.119836 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.119851 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.119862 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:06Z","lastTransitionTime":"2026-01-30T21:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.223080 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.223158 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.223180 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.223211 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.223231 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:06Z","lastTransitionTime":"2026-01-30T21:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.326745 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.326793 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.326802 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.326819 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.326830 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:06Z","lastTransitionTime":"2026-01-30T21:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.430365 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.430441 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.430460 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.430488 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.430506 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:06Z","lastTransitionTime":"2026-01-30T21:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.435795 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.435841 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.435855 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.435877 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.435889 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:06Z","lastTransitionTime":"2026-01-30T21:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:06 crc kubenswrapper[4869]: E0130 21:44:06.456111 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eed4c80a-2486-4f50-8ae9-4ddbd620d70e\\\",\\\"systemUUID\\\":\\\"073254b5-c7c0-49f1-bed8-4438b0f03db1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:06Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.462050 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.462130 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.462155 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.462187 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.462210 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:06Z","lastTransitionTime":"2026-01-30T21:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:06 crc kubenswrapper[4869]: E0130 21:44:06.483967 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eed4c80a-2486-4f50-8ae9-4ddbd620d70e\\\",\\\"systemUUID\\\":\\\"073254b5-c7c0-49f1-bed8-4438b0f03db1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:06Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.489928 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.489984 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.490000 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.490023 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.490038 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:06Z","lastTransitionTime":"2026-01-30T21:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:06 crc kubenswrapper[4869]: E0130 21:44:06.511735 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eed4c80a-2486-4f50-8ae9-4ddbd620d70e\\\",\\\"systemUUID\\\":\\\"073254b5-c7c0-49f1-bed8-4438b0f03db1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:06Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.517646 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.517719 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.517745 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.517778 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.517805 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:06Z","lastTransitionTime":"2026-01-30T21:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:06 crc kubenswrapper[4869]: E0130 21:44:06.533068 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eed4c80a-2486-4f50-8ae9-4ddbd620d70e\\\",\\\"systemUUID\\\":\\\"073254b5-c7c0-49f1-bed8-4438b0f03db1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:06Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.536737 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.536772 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.536786 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.536805 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.536816 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:06Z","lastTransitionTime":"2026-01-30T21:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:06 crc kubenswrapper[4869]: E0130 21:44:06.554082 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eed4c80a-2486-4f50-8ae9-4ddbd620d70e\\\",\\\"systemUUID\\\":\\\"073254b5-c7c0-49f1-bed8-4438b0f03db1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:06Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:06 crc kubenswrapper[4869]: E0130 21:44:06.554242 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.556593 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.556645 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.556662 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.556685 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.556701 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:06Z","lastTransitionTime":"2026-01-30T21:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.659436 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.660452 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.660592 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.660753 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.660876 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:06Z","lastTransitionTime":"2026-01-30T21:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.764705 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.764786 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.764804 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.764826 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.764842 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:06Z","lastTransitionTime":"2026-01-30T21:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.869137 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.869195 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.869214 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.869241 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.869263 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:06Z","lastTransitionTime":"2026-01-30T21:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.924998 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 07:45:39.518853723 +0000 UTC Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.972882 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.973004 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.973029 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.973245 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:06 crc kubenswrapper[4869]: I0130 21:44:06.973268 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:06Z","lastTransitionTime":"2026-01-30T21:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:07 crc kubenswrapper[4869]: I0130 21:44:07.078344 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:07 crc kubenswrapper[4869]: I0130 21:44:07.078418 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:07 crc kubenswrapper[4869]: I0130 21:44:07.078438 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:07 crc kubenswrapper[4869]: I0130 21:44:07.078468 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:07 crc kubenswrapper[4869]: I0130 21:44:07.078494 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:07Z","lastTransitionTime":"2026-01-30T21:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:07 crc kubenswrapper[4869]: I0130 21:44:07.181707 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:07 crc kubenswrapper[4869]: I0130 21:44:07.181783 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:07 crc kubenswrapper[4869]: I0130 21:44:07.181803 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:07 crc kubenswrapper[4869]: I0130 21:44:07.181829 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:07 crc kubenswrapper[4869]: I0130 21:44:07.181849 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:07Z","lastTransitionTime":"2026-01-30T21:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:07 crc kubenswrapper[4869]: I0130 21:44:07.285252 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:07 crc kubenswrapper[4869]: I0130 21:44:07.285332 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:07 crc kubenswrapper[4869]: I0130 21:44:07.285350 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:07 crc kubenswrapper[4869]: I0130 21:44:07.285406 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:07 crc kubenswrapper[4869]: I0130 21:44:07.285425 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:07Z","lastTransitionTime":"2026-01-30T21:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:07 crc kubenswrapper[4869]: I0130 21:44:07.388818 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:07 crc kubenswrapper[4869]: I0130 21:44:07.388926 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:07 crc kubenswrapper[4869]: I0130 21:44:07.388955 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:07 crc kubenswrapper[4869]: I0130 21:44:07.388986 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:07 crc kubenswrapper[4869]: I0130 21:44:07.389010 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:07Z","lastTransitionTime":"2026-01-30T21:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:07 crc kubenswrapper[4869]: I0130 21:44:07.492079 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:07 crc kubenswrapper[4869]: I0130 21:44:07.492131 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:07 crc kubenswrapper[4869]: I0130 21:44:07.492182 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:07 crc kubenswrapper[4869]: I0130 21:44:07.492205 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:07 crc kubenswrapper[4869]: I0130 21:44:07.492220 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:07Z","lastTransitionTime":"2026-01-30T21:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:07 crc kubenswrapper[4869]: I0130 21:44:07.596406 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:07 crc kubenswrapper[4869]: I0130 21:44:07.596675 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:07 crc kubenswrapper[4869]: I0130 21:44:07.596741 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:07 crc kubenswrapper[4869]: I0130 21:44:07.596806 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:07 crc kubenswrapper[4869]: I0130 21:44:07.596932 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:07Z","lastTransitionTime":"2026-01-30T21:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:07 crc kubenswrapper[4869]: I0130 21:44:07.700061 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:07 crc kubenswrapper[4869]: I0130 21:44:07.700177 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:07 crc kubenswrapper[4869]: I0130 21:44:07.700198 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:07 crc kubenswrapper[4869]: I0130 21:44:07.700225 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:07 crc kubenswrapper[4869]: I0130 21:44:07.700245 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:07Z","lastTransitionTime":"2026-01-30T21:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:07 crc kubenswrapper[4869]: I0130 21:44:07.804045 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:07 crc kubenswrapper[4869]: I0130 21:44:07.804134 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:07 crc kubenswrapper[4869]: I0130 21:44:07.804157 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:07 crc kubenswrapper[4869]: I0130 21:44:07.804218 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:07 crc kubenswrapper[4869]: I0130 21:44:07.804247 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:07Z","lastTransitionTime":"2026-01-30T21:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:07 crc kubenswrapper[4869]: I0130 21:44:07.877041 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:44:07 crc kubenswrapper[4869]: I0130 21:44:07.877127 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:44:07 crc kubenswrapper[4869]: I0130 21:44:07.877065 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:44:07 crc kubenswrapper[4869]: I0130 21:44:07.877096 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:44:07 crc kubenswrapper[4869]: E0130 21:44:07.877324 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:44:07 crc kubenswrapper[4869]: E0130 21:44:07.877459 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd" Jan 30 21:44:07 crc kubenswrapper[4869]: E0130 21:44:07.877562 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:44:07 crc kubenswrapper[4869]: E0130 21:44:07.877686 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:44:07 crc kubenswrapper[4869]: I0130 21:44:07.907396 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:07 crc kubenswrapper[4869]: I0130 21:44:07.907493 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:07 crc kubenswrapper[4869]: I0130 21:44:07.907506 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:07 crc kubenswrapper[4869]: I0130 21:44:07.907522 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:07 crc kubenswrapper[4869]: I0130 21:44:07.907534 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:07Z","lastTransitionTime":"2026-01-30T21:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:07 crc kubenswrapper[4869]: I0130 21:44:07.925978 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 21:03:35.180126377 +0000 UTC Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.010764 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.010860 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.010886 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.011002 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.011032 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:08Z","lastTransitionTime":"2026-01-30T21:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.113783 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.113825 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.113834 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.113849 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.113860 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:08Z","lastTransitionTime":"2026-01-30T21:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.216316 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.216353 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.216363 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.216377 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.216387 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:08Z","lastTransitionTime":"2026-01-30T21:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.320021 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.320081 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.320099 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.320123 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.320143 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:08Z","lastTransitionTime":"2026-01-30T21:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.423415 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.423466 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.423482 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.423502 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.423519 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:08Z","lastTransitionTime":"2026-01-30T21:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.527047 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.527122 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.527142 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.527187 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.527216 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:08Z","lastTransitionTime":"2026-01-30T21:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.630857 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.630984 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.631058 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.631095 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.631135 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:08Z","lastTransitionTime":"2026-01-30T21:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.734492 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.734684 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.734764 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.734849 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.734892 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:08Z","lastTransitionTime":"2026-01-30T21:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.837615 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.837668 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.837682 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.837781 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.837795 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:08Z","lastTransitionTime":"2026-01-30T21:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.926391 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 10:59:09.889529922 +0000 UTC Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.940557 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.940628 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.940661 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.940689 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:08 crc kubenswrapper[4869]: I0130 21:44:08.940706 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:08Z","lastTransitionTime":"2026-01-30T21:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.044071 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.044177 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.044196 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.044223 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.044246 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:09Z","lastTransitionTime":"2026-01-30T21:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.147393 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.147476 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.147497 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.147525 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.147546 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:09Z","lastTransitionTime":"2026-01-30T21:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.251421 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.251490 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.251511 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.251537 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.251559 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:09Z","lastTransitionTime":"2026-01-30T21:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.355323 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.355429 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.355458 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.355492 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.355515 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:09Z","lastTransitionTime":"2026-01-30T21:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.459231 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.459314 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.459329 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.459356 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.459373 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:09Z","lastTransitionTime":"2026-01-30T21:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.563292 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.563392 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.563417 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.563450 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.563488 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:09Z","lastTransitionTime":"2026-01-30T21:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.667252 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.667312 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.667329 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.667355 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.667374 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:09Z","lastTransitionTime":"2026-01-30T21:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.771584 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.771680 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.771706 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.771735 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.771754 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:09Z","lastTransitionTime":"2026-01-30T21:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.875664 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.875707 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.875726 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.875750 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.875767 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:09Z","lastTransitionTime":"2026-01-30T21:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.885957 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.886077 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.886135 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.886251 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:44:09 crc kubenswrapper[4869]: E0130 21:44:09.886265 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:44:09 crc kubenswrapper[4869]: E0130 21:44:09.886421 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:44:09 crc kubenswrapper[4869]: E0130 21:44:09.886576 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd" Jan 30 21:44:09 crc kubenswrapper[4869]: E0130 21:44:09.886866 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.909305 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:09Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.926724 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 04:05:40.359952646 +0000 UTC Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.935660 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd656f9-7188-4998-8195-c2ec92442b7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeed7a751cd65072ffc31d4db0396578155d50518ad9dae28ced02985d1e4c62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jdbl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:09Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.956726 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c24fb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a9e9fe8-01db-40b0-bdd8-e3d626df037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c97e257984df620e4f118db44cb63e7748a6d62887c3904f1a2a0641991602d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p2pd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c24fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:09Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.977117 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-45w6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b980f4db-64d3-48c9-9ff8-18f23c4888cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf95q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf95q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-45w6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:09Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.978734 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.978766 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.978779 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.978828 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:09 crc kubenswrapper[4869]: I0130 21:44:09.978846 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:09Z","lastTransitionTime":"2026-01-30T21:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.000387 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tz8jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac3c503-e284-4df8-ae5e-0084a884e456\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6d47a8f0a09eac03369c1f21e44d8ec6aaf85d4d7ad180579ecd30a6311061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgb6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tz8jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:09Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.035190 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c39d4fe5-06cd-4ea4-8336-bd481332c475\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53fdfcb96702330a4f64446223b5dbdcf2e031b0dc1ac2fbf9fac87ae5e51b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53fdfcb96702330a4f64446223b5dbdcf2e031b0dc1ac2fbf9fac87ae5e51b60\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:43:56Z\\\",\\\"message\\\":\\\"rics-daemon-45w6p\\\\nI0130 21:43:56.316255 6395 ovn.go:94] Posting a Warning event for Pod openshift-multus/network-metrics-daemon-45w6p\\\\nI0130 21:43:56.316118 6395 services_controller.go:356] Processing sync for service openshift-cluster-samples-operator/metrics for network=default\\\\nI0130 21:43:56.316307 6395 services_controller.go:360] Finished syncing service metrics on namespace openshift-cluster-samples-operator for network=default : 188.546µs\\\\nI0130 21:43:56.316294 6395 factory.go:1336] Added *v1.Pod event handler 3\\\\nI0130 21:43:56.316355 6395 admin_network_policy_controller.go:133] Setting up event handlers for Admin Network Policy\\\\nI0130 21:43:56.316373 6395 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0130 21:43:56.316416 6395 factory.go:656] Stopping watch factory\\\\nI0130 21:43:56.316446 6395 ovnkube.go:599] Stopped ovnkube\\\\nI0130 21:43:56.316450 6395 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0130 21:43:56.316446 6395 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0130 21:43:56.316482 6395 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 21:43:56.316581 6395 ovnkube.go:137] failed to run ovnkube: 
[failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-stqvf_openshift-ovn-kubernetes(c39d4fe5-06cd-4ea4-8336-bd481332c475)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-stqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:10Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.051626 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qnlzn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e92fcbf8-b6b4-4531-a7cf-ed59225dd821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43fd4613a6d7e2360584cfe98ccaa73d8ed65139ba3c249aeb716b4303d170eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c73341f991fc768d5dcc57229c33204b0ade5469b5b9d7a970f38b97c3d6696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qnlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:10Z is after 2025-08-24T17:21:41Z" Jan 30 
21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.073628 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a48f728-068e-4f6e-9794-12d375245df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb0a197870fc1ba15cbe9437452dfeac28e7670fe2383e71f5e29276082f7236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f22a7faa65595d272f6cb952ac0054da3d3b6a0017dc004f52c65d823f1da06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d705fd79e95205c59ebe341512286e7be25a663cdfe41ec50f0f57582c9f5ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d12f1427fc164e0f92a0ce90e116d3239ad1c56376666cea3ef92178d0988076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 21:43:39.192174 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 21:43:39.192295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:43:39.192865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2603741252/tls.crt::/tmp/serving-cert-2603741252/tls.key\\\\\\\"\\\\nI0130 21:43:39.581448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:43:39.592260 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:43:39.592286 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:43:39.592313 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:43:39.592318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:43:39.598780 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:43:39.598801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:43:39.598815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:43:39.598819 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:43:39.598822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:43:39.598829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:43:39.601912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9563e1c219434c81376559138e931e8457d653060f639891e4a15b3a61bcf2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:10Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.085686 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.085746 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.085760 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.085780 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.085796 4869 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:10Z","lastTransitionTime":"2026-01-30T21:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.090591 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e768b90cb0dae0e7c594e45deebf9178321fecc14339fa1e49769e13c950ff9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:10Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.108046 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6fc0664-5e80-440d-a6e8-4189cdf5c500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a73c5cac1fdcb8d81bccdb2eea7b34e75752439d7871493f4afdd0325f02b2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fa528823e4fa19f4de380fa7c5ab78f40c763a6ea6624ba20e1a65a81980d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vzgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:10Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.129185 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13da86d5-c968-4e20-88dd-86b52c6402a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5f07c14a94021761f655ab66d0782a878dacdacbe9abc2c02db132e5f5721a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0c8909f295c8aea1aa2d6da1a709d1692b8fbdd6bb389d339d644655788328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://267daf399d26dd0253eed95c94b44bd4299990a14bb28159ddfd1f324dec6192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fd36ca3590a916ec08c8ae4d3
b410eb11261c6a5295a4274119f93e08079cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:10Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.146049 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:10Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.162946 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b14ccc4339f1670442e8d9c16039fb3607ed2097daf0e58291333a9d566dc02e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:10Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.180784 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a98ef3151784bd76a916ca4dc6d85c25c10bf0d9eca4444780ceb2c826e439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7741a57e2e55c2f88115cf30643e5c4cd87ab58081677cccdd027bf1307a7a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:10Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.188213 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.188284 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.188303 4869 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.188331 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.188348 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:10Z","lastTransitionTime":"2026-01-30T21:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.196753 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:10Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.211711 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v9n4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ca98e-63b4-4337-af06-7525b62333b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c53920051c0c7658e9ffebac1382950ffea94ead520f2ca1b04cb6a0f9e15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk245\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v9n4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T21:44:10Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.291688 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.291747 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.291760 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.291782 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.291797 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:10Z","lastTransitionTime":"2026-01-30T21:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.394987 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.395063 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.395084 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.395115 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.395136 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:10Z","lastTransitionTime":"2026-01-30T21:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.408624 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b980f4db-64d3-48c9-9ff8-18f23c4888cd-metrics-certs\") pod \"network-metrics-daemon-45w6p\" (UID: \"b980f4db-64d3-48c9-9ff8-18f23c4888cd\") " pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:44:10 crc kubenswrapper[4869]: E0130 21:44:10.408815 4869 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:44:10 crc kubenswrapper[4869]: E0130 21:44:10.408963 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b980f4db-64d3-48c9-9ff8-18f23c4888cd-metrics-certs podName:b980f4db-64d3-48c9-9ff8-18f23c4888cd nodeName:}" failed. No retries permitted until 2026-01-30 21:44:26.408890109 +0000 UTC m=+67.294648164 (durationBeforeRetry 16s). 
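Every "Failed to update status for pod" entry above fails for the same reason: the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 is serving a certificate that expired on 2025-08-24T17:21:41Z, long before the node's current clock (2026-01-30). A minimal Go probe for confirming the served validity window from the node is sketched below; only the endpoint address is taken from these log lines, everything else is illustrative and not part of any cluster component.

    // certprobe.go - illustrative sketch, not cluster code: fetch the
    // certificate served on the webhook port seen in the log lines above
    // and compare its validity window with the local clock.
    package main

    import (
        "crypto/tls"
        "fmt"
        "time"
    )

    func main() {
        // Skip chain verification so an already-expired cert can be inspected.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            fmt.Println("dial:", err)
            return
        }
        defer conn.Close()

        cert := conn.ConnectionState().PeerCertificates[0]
        fmt.Println("subject:  ", cert.Subject)
        fmt.Println("notBefore:", cert.NotBefore)
        fmt.Println("notAfter: ", cert.NotAfter)
        if time.Now().After(cert.NotAfter) {
            // Mirrors the kubelet error: "current time ... is after ..."
            fmt.Println("certificate has expired")
        }
    }

If the same certificate is still being served, notAfter should print as 2025-08-24T17:21:41Z, matching every webhook failure in this section.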
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b980f4db-64d3-48c9-9ff8-18f23c4888cd-metrics-certs") pod "network-metrics-daemon-45w6p" (UID: "b980f4db-64d3-48c9-9ff8-18f23c4888cd") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.498377 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.498445 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.498467 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.498494 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.498514 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:10Z","lastTransitionTime":"2026-01-30T21:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.602020 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.602104 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.602130 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.602164 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.602190 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:10Z","lastTransitionTime":"2026-01-30T21:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
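The MountVolume.SetUp failure for "metrics-certs" just above is not retried immediately: nestedpendingoperations.go parks the operation and schedules the next attempt 16 seconds out ("No retries permitted until 2026-01-30 21:44:26"). The delay grows by doubling on consecutive failures; the sketch below only shows the shape of that schedule, with an assumed 1s initial delay and 2m cap rather than the kubelet's exact constants (with those assumptions, a fifth consecutive failure would land on the 16s wait seen above).

    // backoff.go - illustrative sketch of a doubling retry delay like the
    // "durationBeforeRetry 16s" above; the constants are assumptions.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay := time.Second        // assumed initial delay
        maxDelay := 2 * time.Minute // assumed cap
        for attempt := 1; attempt <= 8; attempt++ {
            fmt.Printf("attempt %d: wait %s before retrying MountVolume\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }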
Has your network provider started?"} Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.706379 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.706442 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.706459 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.706489 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.706508 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:10Z","lastTransitionTime":"2026-01-30T21:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.809828 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.809885 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.809977 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.809997 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.810010 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:10Z","lastTransitionTime":"2026-01-30T21:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.912753 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.912804 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.912818 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.912837 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.912849 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:10Z","lastTransitionTime":"2026-01-30T21:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
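The node keeps flapping back to NotReady because the runtime reports NetworkReady=false: nothing has written a CNI config into /etc/kubernetes/cni/net.d/ yet (the ovnkube-node pod that would do so appears further below with its ovnkube-controller container unready). The check itself lives in the container runtime, but its effect can be approximated with the sketch below; the .conf/.conflist/.json extension filter is an assumption based on libcni's conventions.

    // cnicheck.go - diagnostic sketch, not CRI-O/libcni code: look for CNI
    // network configs in the directory named by the NetworkPluginNotReady
    // message above.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        dir := "/etc/kubernetes/cni/net.d"
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Println("cannot read CNI dir:", err)
            return
        }
        found := 0
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                fmt.Println("config:", filepath.Join(dir, e.Name()))
                found++
            }
        }
        if found == 0 {
            // The state the kubelet keeps logging: NetworkReady=false.
            fmt.Println("no CNI configuration files found")
        }
    }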
Has your network provider started?"} Jan 30 21:44:10 crc kubenswrapper[4869]: I0130 21:44:10.927438 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 09:14:11.7952318 +0000 UTC Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.015311 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.015377 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.015397 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.015422 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.015440 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:11Z","lastTransitionTime":"2026-01-30T21:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.083295 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.096935 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.102842 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
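One healthy-looking line sits in the middle of this: certificate_manager.go:356 reports the kubelet-serving certificate as valid until 2026-02-24 with a rotation deadline of 2025-11-29, i.e. already in the past, so the kubelet should attempt rotation as soon as it is able. The deadline is a jittered point inside the certificate's validity window; the sketch below assumes the upstream client-go behavior of picking roughly 70-90% of the lifetime, and assumes a one-year validity for illustration (only the expiry timestamp comes from the log line).

    // rotation.go - illustrative sketch of a jittered rotation deadline;
    // the 70-90% window and the issue time are assumptions, only the
    // expiry (2026-02-24 05:53:03 UTC) comes from the log line above.
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func main() {
        notBefore := time.Date(2025, 2, 24, 5, 53, 3, 0, time.UTC) // assumed issue time
        notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)  // from the log
        lifetime := notAfter.Sub(notBefore)

        // Pick a deadline uniformly in [70%, 90%] of the lifetime so a fleet
        // of kubelets does not rotate at the same instant.
        frac := 0.7 + 0.2*rand.Float64()
        deadline := notBefore.Add(time.Duration(frac * float64(lifetime)))
        fmt.Println("rotation deadline:", deadline)
    }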
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:11Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.118429 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.118592 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.118654 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.118748 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.118816 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:11Z","lastTransitionTime":"2026-01-30T21:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
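The condition={...} payload that setters.go:603 prints with every "Node became not ready" line is ordinary JSON and can be decoded as-is. The sketch below uses the exact condition text from these lines; the struct mirrors only the fields visible in the log, not the full Kubernetes NodeCondition type.

    // condparse.go - decode the Ready condition printed in the log lines
    // above; the struct is a trimmed stand-in for v1.NodeCondition.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    type nodeCondition struct {
        Type               string `json:"type"`
        Status             string `json:"status"`
        LastHeartbeatTime  string `json:"lastHeartbeatTime"`
        LastTransitionTime string `json:"lastTransitionTime"`
        Reason             string `json:"reason"`
        Message            string `json:"message"`
    }

    func main() {
        raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:10Z","lastTransitionTime":"2026-01-30T21:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
        var c nodeCondition
        if err := json.Unmarshal([]byte(raw), &c); err != nil {
            panic(err)
        }
        fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
    }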
Has your network provider started?"} Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.122429 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd656f9-7188-4998-8195-c2ec92442b7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeed7a751cd65072ffc31d4db0396578155d50518ad9dae28ced02985d1e4c62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jdbl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:11Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.141073 4869 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-c24fb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a9e9fe8-01db-40b0-bdd8-e3d626df037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c97e257984df620e4f118db44cb63e7748a6d62887c3904f1a2a0641991602d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p2pd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c24fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:11Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.154631 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-45w6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b980f4db-64d3-48c9-9ff8-18f23c4888cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf95q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf95q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-45w6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:11Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.181705 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c39d4fe5-06cd-4ea4-8336-bd481332c475\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53fdfcb96702330a4f64446223b5dbdcf2e031b0dc1ac2fbf9fac87ae5e51b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53fdfcb96702330a4f64446223b5dbdcf2e031b0dc1ac2fbf9fac87ae5e51b60\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:43:56Z\\\",\\\"message\\\":\\\"rics-daemon-45w6p\\\\nI0130 21:43:56.316255 6395 ovn.go:94] Posting a Warning event for Pod openshift-multus/network-metrics-daemon-45w6p\\\\nI0130 21:43:56.316118 6395 services_controller.go:356] Processing sync for service openshift-cluster-samples-operator/metrics for network=default\\\\nI0130 21:43:56.316307 6395 services_controller.go:360] Finished syncing service metrics on namespace openshift-cluster-samples-operator for network=default : 188.546µs\\\\nI0130 21:43:56.316294 6395 factory.go:1336] Added *v1.Pod event handler 3\\\\nI0130 21:43:56.316355 6395 admin_network_policy_controller.go:133] Setting up event handlers for Admin Network Policy\\\\nI0130 21:43:56.316373 6395 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0130 21:43:56.316416 6395 factory.go:656] Stopping watch factory\\\\nI0130 21:43:56.316446 6395 ovnkube.go:599] Stopped ovnkube\\\\nI0130 21:43:56.316450 6395 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0130 21:43:56.316446 6395 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0130 21:43:56.316482 6395 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 21:43:56.316581 6395 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-stqvf_openshift-ovn-kubernetes(c39d4fe5-06cd-4ea4-8336-bd481332c475)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-stqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:11Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.195400 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qnlzn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e92fcbf8-b6b4-4531-a7cf-ed59225dd821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43fd4613a6d7e2360584cfe98ccaa73d8ed65139ba3c249aeb716b4303d170eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tsq2
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c73341f991fc768d5dcc57229c33204b0ade5469b5b9d7a970f38b97c3d6696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qnlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:11Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.217734 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a48f728-068e-4f6e-9794-12d375245df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb0a197870fc1ba15cbe9437452dfeac28e7670fe2383e71f5e29276082f7236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f22a7faa65595d272f6cb952ac0054da3d3b6a0017dc004f52c65d823f1da06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d705fd79e95205c59ebe341512286e7be25a663cdfe41ec50f0f57582c9f5ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d12f1427fc164e0f92a0ce90e116d3239ad1c56376666cea3ef92178d0988076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 21:43:39.192174 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 21:43:39.192295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:43:39.192865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2603741252/tls.crt::/tmp/serving-cert-2603741252/tls.key\\\\\\\"\\\\nI0130 21:43:39.581448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:43:39.592260 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:43:39.592286 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:43:39.592313 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:43:39.592318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:43:39.598780 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:43:39.598801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:43:39.598815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:43:39.598819 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:43:39.598822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:43:39.598829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:43:39.601912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9563e1c219434c81376559138e931e8457d653060f639891e4a15b3a61bcf2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:11Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.221096 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.221162 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.221175 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.221191 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.221206 4869 setters.go:603] "Node became not ready" node="crc" 
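
Every "Failed to update status for pod" entry above fails the same way: the status patch goes through the admission webhook pod.network-node-identity.openshift.io, and the POST to https://127.0.0.1:9743/pod is rejected during the TLS handshake because the webhook's serving certificate expired on 2025-08-24, long before the node's current clock (2026-01-30). A minimal Go sketch of the same client-side check (an editor's illustration, not part of the log; the address comes from the error text, and InsecureSkipVerify is used only so the certificate can be read despite failing verification):

    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
        "time"
    )

    func main() {
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            log.Fatalf("dial: %v", err)
        }
        defer conn.Close()

        certs := conn.ConnectionState().PeerCertificates
        if len(certs) == 0 {
            log.Fatal("no peer certificates presented")
        }

        // Same validity-window test crypto/x509 applies during verification:
        // NotBefore <= now <= NotAfter. Here now is after NotAfter.
        cert := certs[0]
        now := time.Now()
        fmt.Printf("notBefore=%s notAfter=%s\n", cert.NotBefore, cert.NotAfter)
        fmt.Printf("expired=%v\n", now.After(cert.NotAfter))
    }
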
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:11Z","lastTransitionTime":"2026-01-30T21:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.238341 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e768b90cb0dae0e7c594e45deebf9178321fecc14339fa1e49769e13c950ff9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:11Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.252125 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6fc0664-5e80-440d-a6e8-4189cdf5c500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a73c5cac1fdcb8d81bccdb2eea7b34e75752439d7871493f4afdd0325f02b2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fa528823e4fa19f4de380fa7c5ab78f40c763a6ea6624ba20e1a65a81980d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vzgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:11Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.268199 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-tz8jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac3c503-e284-4df8-ae5e-0084a884e456\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6d47a8f0a09eac03369c1f21e44d8ec6aaf85d4d7ad180579ecd30a6311061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgb6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-tz8jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:11Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.290978 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13da86d5-c968-4e20-88dd-86b52c6402a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5f07c14a94021761f655ab66d0782a878dacdacbe9abc2c02db132e5f5721a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0c8909f295c8aea1aa2d6da1a709d1692b8fbdd6bb389d339d644655788328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://267daf399d26dd0253eed95c94b44bd4299990a14bb28159ddfd1f324dec6192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fd36ca3590a916ec08c8ae4d3b410eb11261c6a5295a4274119f93e08079cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:11Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.310332 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:11Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.324927 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.325169 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.325261 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.325366 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.325472 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:11Z","lastTransitionTime":"2026-01-30T21:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
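
The recurring "Node became not ready" condition is the container runtime's network-readiness gate: NetworkReady stays false until a CNI configuration file exists under /etc/kubernetes/cni/net.d/, and with ovnkube-controller crash-looping above, nothing has written one yet. A hedged Go sketch of that kind of check (the directory comes from the log; the extension list follows the upstream libcni convention and is an assumption here, not CRI-O's exact code):

    package main

    import (
        "fmt"
        "log"
        "os"
        "path/filepath"
    )

    func main() {
        dir := "/etc/kubernetes/cni/net.d"
        entries, err := os.ReadDir(dir)
        if err != nil {
            log.Fatalf("read %s: %v", dir, err)
        }
        var confs []string
        for _, e := range entries {
            // libcni loads *.conf, *.conflist and *.json network configs.
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                confs = append(confs, e.Name())
            }
        }
        if len(confs) == 0 {
            fmt.Println("no CNI configuration file: NetworkPluginNotReady")
            return
        }
        fmt.Println("CNI configs:", confs)
    }
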
Has your network provider started?"} Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.331837 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b14ccc4339f1670442e8d9c16039fb3607ed2097daf0e58291333a9d566dc02e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:11Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.349047 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a98ef3151784bd76a916ca4dc6d85c25c10bf0d9eca4444780ceb2c826e439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7741a57e2e55c2f88115cf30643e5c4cd87ab58081677cccdd027bf1307a7a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:11Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.363524 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:11Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.379501 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v9n4p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ca98e-63b4-4337-af06-7525b62333b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c53920051c0c7658e9ffebac1382950ffea94ead520f2ca1b04cb6a0f9e15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk245\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v9n4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:11Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.428814 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.428948 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.428976 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.429002 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.429019 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:11Z","lastTransitionTime":"2026-01-30T21:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.532505 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.533228 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.533479 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.533673 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.533874 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:11Z","lastTransitionTime":"2026-01-30T21:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.623155 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:44:11 crc kubenswrapper[4869]: E0130 21:44:11.623635 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:44:43.623600868 +0000 UTC m=+84.509358893 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.637990 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.638201 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.638269 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.638339 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.638468 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:11Z","lastTransitionTime":"2026-01-30T21:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.724603 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.724687 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.724745 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.724811 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:44:11 crc kubenswrapper[4869]: E0130 21:44:11.725065 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 21:44:11 crc kubenswrapper[4869]: E0130 21:44:11.725116 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 21:44:11 crc kubenswrapper[4869]: E0130 21:44:11.725146 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 21:44:11 crc kubenswrapper[4869]: E0130 21:44:11.725240 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 21:44:43.725215064 +0000 UTC m=+84.610973129 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 21:44:11 crc kubenswrapper[4869]: E0130 21:44:11.725239 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 30 21:44:11 crc kubenswrapper[4869]: E0130 21:44:11.725303 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:44:43.725290597 +0000 UTC m=+84.611048652 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 30 21:44:11 crc kubenswrapper[4869]: E0130 21:44:11.725320 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 21:44:11 crc kubenswrapper[4869]: E0130 21:44:11.725415 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 21:44:11 crc kubenswrapper[4869]: E0130 21:44:11.725434 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 21:44:11 crc kubenswrapper[4869]: E0130 21:44:11.725540 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 21:44:43.725511122 +0000 UTC m=+84.611269147 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 21:44:11 crc kubenswrapper[4869]: E0130 21:44:11.725546 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 21:44:11 crc kubenswrapper[4869]: E0130 21:44:11.725757 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:44:43.725730549 +0000 UTC m=+84.611488564 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.742068 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.742139 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.742166 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.742202 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.742229 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:11Z","lastTransitionTime":"2026-01-30T21:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.845370 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.845428 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.845450 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.845477 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.845496 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:11Z","lastTransitionTime":"2026-01-30T21:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.876133 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.876226 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.876366 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p"
Jan 30 21:44:11 crc kubenswrapper[4869]: E0130 21:44:11.876355 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.876133 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:44:11 crc kubenswrapper[4869]: E0130 21:44:11.876665 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd"
Jan 30 21:44:11 crc kubenswrapper[4869]: E0130 21:44:11.876866 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:44:11 crc kubenswrapper[4869]: E0130 21:44:11.876998 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.928467 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 17:02:38.555708213 +0000 UTC
Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.948834 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.948871 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.948883 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.948920 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:11 crc kubenswrapper[4869]: I0130 21:44:11.948934 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:11Z","lastTransitionTime":"2026-01-30T21:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.053140 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.053464 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.053637 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.053783 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.053986 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:12Z","lastTransitionTime":"2026-01-30T21:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.131599 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.154245 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:12Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.158566 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.158659 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.158723 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.158755 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.158775 4869 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:12Z","lastTransitionTime":"2026-01-30T21:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.182298 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd656f9-7188-4998-8195-c2ec92442b7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeed7a751cd65072ffc31d4db0396578155d50518ad9dae28ced02985d1e4c62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\
\\":{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jdbl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:12Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.200291 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c24fb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a9e9fe8-01db-40b0-bdd8-e3d626df037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c97e257984df620e4f118db44cb63e7748a6d62887c3904f1a2a0641991602d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p2pd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c24fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:12Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.219081 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-45w6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b980f4db-64d3-48c9-9ff8-18f23c4888cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf95q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf95q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-45w6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:12Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.240024 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6fc0664-5e80-440d-a6e8-4189cdf5c500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a73c5cac1fdcb8d81bccdb2eea7b34e75752439d7871493f4afdd0325f02b2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fa528823e4fa19f4de380fa7c5ab78f40c763a6ea6624ba20e1a65a81980d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vzgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:12Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.262315 4869 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.262719 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.262868 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.263017 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.263140 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:12Z","lastTransitionTime":"2026-01-30T21:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.263703 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tz8jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac3c503-e284-4df8-ae5e-0084a884e456\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6d47a8f0a09eac03369c1f21e44d8ec6aaf85d4d7ad180579ecd30a6311061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgb6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tz8jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:12Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.296989 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c39d4fe5-06cd-4ea4-8336-bd481332c475\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53fdfcb96702330a4f64446223b5dbdcf2e031b0
dc1ac2fbf9fac87ae5e51b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53fdfcb96702330a4f64446223b5dbdcf2e031b0dc1ac2fbf9fac87ae5e51b60\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:43:56Z\\\",\\\"message\\\":\\\"rics-daemon-45w6p\\\\nI0130 21:43:56.316255 6395 ovn.go:94] Posting a Warning event for Pod openshift-multus/network-metrics-daemon-45w6p\\\\nI0130 21:43:56.316118 6395 services_controller.go:356] Processing sync for service openshift-cluster-samples-operator/metrics for network=default\\\\nI0130 21:43:56.316307 6395 services_controller.go:360] Finished syncing service metrics on namespace openshift-cluster-samples-operator for network=default : 188.546µs\\\\nI0130 21:43:56.316294 6395 factory.go:1336] Added *v1.Pod event handler 3\\\\nI0130 21:43:56.316355 6395 admin_network_policy_controller.go:133] Setting up event handlers for Admin Network Policy\\\\nI0130 21:43:56.316373 6395 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0130 21:43:56.316416 6395 factory.go:656] Stopping watch factory\\\\nI0130 21:43:56.316446 6395 ovnkube.go:599] Stopped ovnkube\\\\nI0130 21:43:56.316450 6395 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0130 21:43:56.316446 6395 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0130 21:43:56.316482 6395 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 21:43:56.316581 6395 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-stqvf_openshift-ovn-kubernetes(c39d4fe5-06cd-4ea4-8336-bd481332c475)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-stqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:12Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.315890 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qnlzn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e92fcbf8-b6b4-4531-a7cf-ed59225dd821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43fd4613a6d7e2360584cfe98ccaa73d8ed65139ba3c249aeb716b4303d170eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tsq2
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c73341f991fc768d5dcc57229c33204b0ade5469b5b9d7a970f38b97c3d6696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qnlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:12Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.339032 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a48f728-068e-4f6e-9794-12d375245df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb0a197870fc1ba15cbe9437452dfeac28e7670fe2383e71f5e29276082f7236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f22a7faa65595d272f6cb952ac0054da3d3b6a0017dc004f52c65d823f1da06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d705fd79e95205c59ebe341512286e7be25a663cdfe41ec50f0f57582c9f5ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d12f1427fc164e0f92a0ce90e116d3239ad1c56376666cea3ef92178d0988076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 21:43:39.192174 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 21:43:39.192295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:43:39.192865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2603741252/tls.crt::/tmp/serving-cert-2603741252/tls.key\\\\\\\"\\\\nI0130 21:43:39.581448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:43:39.592260 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:43:39.592286 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:43:39.592313 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:43:39.592318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:43:39.598780 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:43:39.598801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:43:39.598815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:43:39.598819 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:43:39.598822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:43:39.598829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:43:39.601912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9563e1c219434c81376559138e931e8457d653060f639891e4a15b3a61bcf2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:12Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.358617 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e768b90cb0dae0e7c594e45deebf9178321fecc14339fa1e49769e13c950ff9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:12Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.367033 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.367204 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.367386 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.367560 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.367719 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:12Z","lastTransitionTime":"2026-01-30T21:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.379990 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13da86d5-c968-4e20-88dd-86b52c6402a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5f07c14a94021761f655ab66d0782a878dacdacbe9abc2c02db132e5f5721a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0c8909f295c8aea1aa2d6da1a709d1692b8fbdd6bb389d339d644655788328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://267daf399d26dd0253eed95c94b44bd4299990a14bb28159ddfd1f324dec6192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fd36ca3590a916ec08c8ae4d3b410eb11261c6a5295a4274119f93e08079cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:12Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.402197 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:12Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.426278 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b14ccc4339f1670442e8d9c16039fb3607ed2097daf0e58291333a9d566dc02e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:12Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.446139 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a98ef3151784bd76a916ca4dc6d85c25c10bf0d9eca4444780ceb2c826e439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7741a57e2e55c2f88115cf30643e5c4cd87ab58081677cccdd027bf1307a7a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:12Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.465788 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f947b452-4e15-43ce-a4f4-3ca13d10f8d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://754edbe89075ed676689e09cc4aceaa7e45560a1db966e12c22a96be3bd3eba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68975f2095b7dcd1eaf59dd358c4ac1b731116418a8991290d4ee2d66611f2d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://568c346025fcd6f3b511f60206c6836f445c3e1be15135c064fed0507d20aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07626e9fc038e2c00d6751a0660aedfe5e66e6f735999b30cbffb7d6f8ff7011\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07626e9fc038e2c00d6751a0660aedfe5e66e6f735999b30cbffb7d6f8ff7011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:12Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.471589 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.471829 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.472067 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.472259 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.472438 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:12Z","lastTransitionTime":"2026-01-30T21:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.494933 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:12Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.536976 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v9n4p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ca98e-63b4-4337-af06-7525b62333b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c53920051c0c7658e9ffebac1382950ffea94ead520f2ca1b04cb6a0f9e15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk245\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v9n4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:12Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.575407 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.575463 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.575476 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.575498 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.575511 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:12Z","lastTransitionTime":"2026-01-30T21:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.679025 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.679106 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.679127 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.679157 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.679178 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:12Z","lastTransitionTime":"2026-01-30T21:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.782226 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.782288 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.782300 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.782323 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.782346 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:12Z","lastTransitionTime":"2026-01-30T21:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.886101 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.886157 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.886167 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.886186 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.886196 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:12Z","lastTransitionTime":"2026-01-30T21:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.929129 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 21:08:36.777288845 +0000 UTC Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.989006 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.989045 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.989053 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.989069 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:12 crc kubenswrapper[4869]: I0130 21:44:12.989079 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:12Z","lastTransitionTime":"2026-01-30T21:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:13 crc kubenswrapper[4869]: I0130 21:44:13.093975 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:13 crc kubenswrapper[4869]: I0130 21:44:13.094026 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:13 crc kubenswrapper[4869]: I0130 21:44:13.094035 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:13 crc kubenswrapper[4869]: I0130 21:44:13.094051 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:13 crc kubenswrapper[4869]: I0130 21:44:13.094064 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:13Z","lastTransitionTime":"2026-01-30T21:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:13 crc kubenswrapper[4869]: I0130 21:44:13.198598 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:13 crc kubenswrapper[4869]: I0130 21:44:13.198692 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:13 crc kubenswrapper[4869]: I0130 21:44:13.198717 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:13 crc kubenswrapper[4869]: I0130 21:44:13.198752 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:13 crc kubenswrapper[4869]: I0130 21:44:13.198777 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:13Z","lastTransitionTime":"2026-01-30T21:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:13 crc kubenswrapper[4869]: I0130 21:44:13.301819 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:13 crc kubenswrapper[4869]: I0130 21:44:13.301969 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:13 crc kubenswrapper[4869]: I0130 21:44:13.302022 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:13 crc kubenswrapper[4869]: I0130 21:44:13.302055 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:13 crc kubenswrapper[4869]: I0130 21:44:13.302074 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:13Z","lastTransitionTime":"2026-01-30T21:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:13 crc kubenswrapper[4869]: I0130 21:44:13.405854 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:13 crc kubenswrapper[4869]: I0130 21:44:13.405962 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:13 crc kubenswrapper[4869]: I0130 21:44:13.405981 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:13 crc kubenswrapper[4869]: I0130 21:44:13.406014 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:13 crc kubenswrapper[4869]: I0130 21:44:13.406035 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:13Z","lastTransitionTime":"2026-01-30T21:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:13 crc kubenswrapper[4869]: I0130 21:44:13.510130 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:13 crc kubenswrapper[4869]: I0130 21:44:13.510184 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:13 crc kubenswrapper[4869]: I0130 21:44:13.510205 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:13 crc kubenswrapper[4869]: I0130 21:44:13.510233 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:13 crc kubenswrapper[4869]: I0130 21:44:13.510251 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:13Z","lastTransitionTime":"2026-01-30T21:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:13 crc kubenswrapper[4869]: I0130 21:44:13.613617 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:13 crc kubenswrapper[4869]: I0130 21:44:13.613661 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:13 crc kubenswrapper[4869]: I0130 21:44:13.613670 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:13 crc kubenswrapper[4869]: I0130 21:44:13.613684 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:13 crc kubenswrapper[4869]: I0130 21:44:13.613693 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:13Z","lastTransitionTime":"2026-01-30T21:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 21:44:13 crc kubenswrapper[4869]: I0130 21:44:13.717336 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:13 crc kubenswrapper[4869]: I0130 21:44:13.717384 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:13 crc kubenswrapper[4869]: I0130 21:44:13.717397 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:13 crc kubenswrapper[4869]: I0130 21:44:13.717421 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:13 crc kubenswrapper[4869]: I0130 21:44:13.717433 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:13Z","lastTransitionTime":"2026-01-30T21:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:13 crc kubenswrapper[4869]: I0130 21:44:13.821121 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:13 crc kubenswrapper[4869]: I0130 21:44:13.821176 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:13 crc kubenswrapper[4869]: I0130 21:44:13.821189 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:13 crc kubenswrapper[4869]: I0130 21:44:13.821210 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:13 crc kubenswrapper[4869]: I0130 21:44:13.821224 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:13Z","lastTransitionTime":"2026-01-30T21:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:13 crc kubenswrapper[4869]: I0130 21:44:13.876048 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:44:13 crc kubenswrapper[4869]: I0130 21:44:13.876112 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:44:13 crc kubenswrapper[4869]: I0130 21:44:13.876149 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:44:13 crc kubenswrapper[4869]: I0130 21:44:13.876048 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p"
Jan 30 21:44:13 crc kubenswrapper[4869]: E0130 21:44:13.876319 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
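Every one of these failures traces back to the same check: the directory named in the message stays empty until the network operator writes a CNI config, and the kubelet keeps the node NotReady in the meantime. Below is a minimal sketch of that check in Python; the path is taken verbatim from the log, while the accepted extensions mirror libcni-style config loaders and are an assumption here, not something the log states.

    # Minimal sketch: reproduce the check behind "no CNI configuration file in
    # /etc/kubernetes/cni/net.d/." from the kubelet errors above. The path comes
    # from the log; the extension list is an assumption modeled on libcni-style
    # loaders.
    import os

    CNI_CONF_DIR = "/etc/kubernetes/cni/net.d"

    def cni_configs(conf_dir: str = CNI_CONF_DIR) -> list[str]:
        """Return CNI config files present in conf_dir, empty if none or missing."""
        try:
            names = os.listdir(conf_dir)
        except FileNotFoundError:
            return []
        return sorted(n for n in names if n.endswith((".conf", ".conflist", ".json")))

    if __name__ == "__main__":
        found = cni_configs()
        # While the network provider has not written a config yet, this prints
        # the fallback string, matching the NetworkPluginNotReady condition
        # repeated throughout this log.
        print(found or "no CNI configuration file found")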
Jan 30 21:44:13 crc kubenswrapper[4869]: E0130 21:44:13.876447 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:44:13 crc kubenswrapper[4869]: E0130 21:44:13.876658 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd"
Jan 30 21:44:13 crc kubenswrapper[4869]: E0130 21:44:13.876955 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:44:13 crc kubenswrapper[4869]: I0130 21:44:13.924208 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:13 crc kubenswrapper[4869]: I0130 21:44:13.924321 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:13 crc kubenswrapper[4869]: I0130 21:44:13.924341 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:13 crc kubenswrapper[4869]: I0130 21:44:13.924368 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:13 crc kubenswrapper[4869]: I0130 21:44:13.924389 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:13Z","lastTransitionTime":"2026-01-30T21:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:13 crc kubenswrapper[4869]: I0130 21:44:13.929538 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 10:19:53.969318368 +0000 UTC Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.027592 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.027682 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.027711 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.027787 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.027815 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:14Z","lastTransitionTime":"2026-01-30T21:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.130936 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.131029 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.131053 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.131083 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.131108 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:14Z","lastTransitionTime":"2026-01-30T21:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.234514 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.234589 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.234615 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.234650 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.234674 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:14Z","lastTransitionTime":"2026-01-30T21:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.338542 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.338617 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.338639 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.338668 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.338692 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:14Z","lastTransitionTime":"2026-01-30T21:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.441864 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.441954 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.441968 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.441989 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.442003 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:14Z","lastTransitionTime":"2026-01-30T21:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.545192 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.545252 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.545266 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.545292 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.545312 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:14Z","lastTransitionTime":"2026-01-30T21:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.648184 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.648254 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.648267 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.648285 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.648298 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:14Z","lastTransitionTime":"2026-01-30T21:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.752072 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.752212 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.752237 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.752270 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.752291 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:14Z","lastTransitionTime":"2026-01-30T21:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.854674 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.854783 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.854811 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.854846 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.854871 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:14Z","lastTransitionTime":"2026-01-30T21:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.930564 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 06:22:47.977385281 +0000 UTC Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.958319 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.958372 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.958385 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.958404 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:14 crc kubenswrapper[4869]: I0130 21:44:14.958418 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:14Z","lastTransitionTime":"2026-01-30T21:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.062044 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.062118 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.062142 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.062175 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.062195 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:15Z","lastTransitionTime":"2026-01-30T21:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.165579 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.165659 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.165679 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.165708 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.165729 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:15Z","lastTransitionTime":"2026-01-30T21:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.268971 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.269044 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.269064 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.269095 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.269118 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:15Z","lastTransitionTime":"2026-01-30T21:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.372890 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.373025 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.373046 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.373073 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.373091 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:15Z","lastTransitionTime":"2026-01-30T21:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.476468 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.476595 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.476619 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.476649 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.476677 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:15Z","lastTransitionTime":"2026-01-30T21:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.580778 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.580844 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.580858 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.580883 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.580921 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:15Z","lastTransitionTime":"2026-01-30T21:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.683632 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.683684 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.683695 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.683715 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.683725 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:15Z","lastTransitionTime":"2026-01-30T21:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.787157 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.787218 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.787237 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.787263 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.787282 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:15Z","lastTransitionTime":"2026-01-30T21:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.876959 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:44:15 crc kubenswrapper[4869]: E0130 21:44:15.877095 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.877096 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.877220 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p"
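Between the NodeNotReady heartbeats, the useful signal is which pods keep failing to sync. A small parser like the sketch below (the file name kubelet.log is hypothetical; feed it any dump in the format shown here) pulls the pod and podUID out of each "Error syncing pod, skipping" entry and tallies them, tolerating entries that wrap across lines as they do in this dump.

    # Minimal sketch: tally repeated "Error syncing pod, skipping" entries by
    # pod and podUID so the four stuck pods stand out from the heartbeat noise.
    import re
    from collections import Counter

    PATTERN = re.compile(
        r'"Error syncing pod, skipping".*?pod="(?P<pod>[^"]+)" podUID="(?P<uid>[^"]+)"',
        re.DOTALL,  # entries in this dump wrap across lines mid-message
    )

    def failing_pods(path: str = "kubelet.log") -> Counter:
        with open(path, encoding="utf-8") as fh:
            text = fh.read()
        return Counter((m.group("pod"), m.group("uid")) for m in PATTERN.finditer(text))

    if __name__ == "__main__":
        for (pod, uid), n in failing_pods().most_common():
            print(f"{n:4d}  {pod}  ({uid})")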
Jan 30 21:44:15 crc kubenswrapper[4869]: E0130 21:44:15.877376 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.877486 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:44:15 crc kubenswrapper[4869]: E0130 21:44:15.877698 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd"
Jan 30 21:44:15 crc kubenswrapper[4869]: E0130 21:44:15.877787 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.890359 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.890402 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.890410 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.890425 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.890435 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:15Z","lastTransitionTime":"2026-01-30T21:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.931752 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 04:34:16.425567697 +0000 UTC
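The certificate_manager.go lines deserve a second look. The kubelet-serving certificate does not expire until 2026-02-24, but every recomputed rotation deadline in this stretch (2026-01-11, then 2026-01-08, then 2026-01-07 above) already lies in the past relative to the log's own clock of Jan 30, so the manager treats rotation as due each time it checks. The sketch below approximates how such a deadline is derived; the 70-90% jitter window is only an approximation of client-go's behavior, and the notBefore value is assumed, since neither appears in this log.

    # Minimal sketch of a jittered rotation deadline, approximating why three
    # different "rotation deadline is ..." values appear above. Assumptions:
    # the jitter window [0.7, 0.9] of the validity period, and a ~1 year cert.
    import random
    from datetime import datetime, timedelta

    def rotation_deadline(not_before: datetime, not_after: datetime) -> datetime:
        total = (not_after - not_before).total_seconds()
        return not_before + timedelta(seconds=total * random.uniform(0.7, 0.9))

    if __name__ == "__main__":
        not_after = datetime(2026, 2, 24, 5, 53, 3)        # expiration, from the log
        not_before = not_after - timedelta(days=365)       # assumed issue date
        # Each recomputation lands around Dec-Jan, i.e. already in the past on
        # Jan 30, which is why the deadline is recomputed on every pass.
        for _ in range(3):
            print(rotation_deadline(not_before, not_after))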
Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.995455 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.995540 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.995559 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.995586 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:15 crc kubenswrapper[4869]: I0130 21:44:15.995613 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:15Z","lastTransitionTime":"2026-01-30T21:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.099589 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.099653 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.099670 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.099691 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.099707 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:16Z","lastTransitionTime":"2026-01-30T21:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.204008 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.204102 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.204127 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.204162 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.204184 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:16Z","lastTransitionTime":"2026-01-30T21:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.307920 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.307986 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.308015 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.308042 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.308059 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:16Z","lastTransitionTime":"2026-01-30T21:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.411132 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.411232 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.411256 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.411293 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.411315 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:16Z","lastTransitionTime":"2026-01-30T21:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.514305 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.514379 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.514394 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.514422 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.514442 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:16Z","lastTransitionTime":"2026-01-30T21:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.616673 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.616728 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.616746 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.616771 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.616790 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:16Z","lastTransitionTime":"2026-01-30T21:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.629987 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.630035 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.630048 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.630066 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.630078 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:16Z","lastTransitionTime":"2026-01-30T21:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:16 crc kubenswrapper[4869]: E0130 21:44:16.645989 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eed4c80a-2486-4f50-8ae9-4ddbd620d70e\\\",\\\"systemUUID\\\":\\\"073254b5-c7c0-49f1-bed8-4438b0f03db1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:16Z is after 2025-08-24T17:21:41Z"
Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.651071 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.651201 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
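The patch failure above is the first hard error in this stretch that is not CNI-related: the node-status update is rejected because the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a certificate that expired on 2025-08-24, months before the log's current time. The arithmetic below uses only the two timestamps quoted verbatim in the error; no cluster access is assumed.

    # Minimal sketch: quantify the webhook failure from the two timestamps the
    # kubelet itself reports in the error message above.
    from datetime import datetime, timezone

    now = datetime(2026, 1, 30, 21, 44, 16, tzinfo=timezone.utc)        # "current time" in the error
    not_after = datetime(2025, 8, 24, 17, 21, 41, tzinfo=timezone.utc)  # webhook cert expiry

    expired_for = now - not_after
    # Prints 159: every node-status patch is rejected at the TLS layer because
    # the webhook serving certificate has been expired for about five months.
    print(f"webhook certificate expired {expired_for.days} days ago")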
event="NodeHasNoDiskPressure" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.651280 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.651359 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.651430 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:16Z","lastTransitionTime":"2026-01-30T21:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:16 crc kubenswrapper[4869]: E0130 21:44:16.667010 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eed4c80a-2486-4f50-8ae9-4ddbd620d70e\\\",\\\"systemUUID\\\":\\\"073254b5-c7c0-49f1-bed8-4438b0f03db1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:16Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.671175 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.671214 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.671224 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.671240 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.671252 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:16Z","lastTransitionTime":"2026-01-30T21:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:16 crc kubenswrapper[4869]: E0130 21:44:16.688096 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eed4c80a-2486-4f50-8ae9-4ddbd620d70e\\\",\\\"systemUUID\\\":\\\"073254b5-c7c0-49f1-bed8-4438b0f03db1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:16Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.691935 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.691960 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.691969 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.691985 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.691997 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:16Z","lastTransitionTime":"2026-01-30T21:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:16 crc kubenswrapper[4869]: E0130 21:44:16.708648 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eed4c80a-2486-4f50-8ae9-4ddbd620d70e\\\",\\\"systemUUID\\\":\\\"073254b5-c7c0-49f1-bed8-4438b0f03db1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:16Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.713170 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.713196 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.713210 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.713247 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.713258 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:16Z","lastTransitionTime":"2026-01-30T21:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:16 crc kubenswrapper[4869]: E0130 21:44:16.727432 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eed4c80a-2486-4f50-8ae9-4ddbd620d70e\\\",\\\"systemUUID\\\":\\\"073254b5-c7c0-49f1-bed8-4438b0f03db1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:16Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:16 crc kubenswrapper[4869]: E0130 21:44:16.727624 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.730054 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
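[Annotation. Every retry above fails identically: the API server cannot admit the node-status patch because the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24, months before the node clock of 2026-01-30. A minimal sketch for confirming what that endpoint actually serves, assuming Python with the third-party cryptography package is available on the node; verification is deliberately disabled, since an expired certificate would otherwise abort the handshake, and if the server also demands a client certificate the handshake may still fail, in which case inspecting the certificate file on disk is the fallback.]

```python
# Sketch: read the certificate served at the webhook endpoint named in the
# error above and print its validity window. Verification is disabled on
# purpose: an expired certificate would otherwise abort the handshake.
# Assumes the third-party `cryptography` package is installed.
import socket
import ssl
from datetime import datetime, timezone

from cryptography import x509

HOST, PORT = "127.0.0.1", 9743  # from the Post https://127.0.0.1:9743/node error

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False      # must be cleared before setting CERT_NONE
ctx.verify_mode = ssl.CERT_NONE

with socket.create_connection((HOST, PORT), timeout=5) as raw:
    with ctx.wrap_socket(raw, server_hostname=HOST) as tls:
        der = tls.getpeercert(binary_form=True)  # DER bytes even when unverified

cert = x509.load_der_x509_certificate(der)
now = datetime.now(timezone.utc)
print("subject:   ", cert.subject.rfc4514_string())
print("not before:", cert.not_valid_before_utc)   # cryptography >= 42
print("not after: ", cert.not_valid_after_utc)    # expect 2025-08-24T17:21:41Z
print("expired:   ", now > cert.not_valid_after_utc)
```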
event="NodeHasSufficientMemory" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.730093 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.730105 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.730126 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.730140 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:16Z","lastTransitionTime":"2026-01-30T21:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.832774 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.832809 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.832822 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.832838 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.832850 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:16Z","lastTransitionTime":"2026-01-30T21:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.932785 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 10:11:43.406211856 +0000 UTC Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.935399 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.935461 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.935474 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.935489 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:16 crc kubenswrapper[4869]: I0130 21:44:16.935499 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:16Z","lastTransitionTime":"2026-01-30T21:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:17 crc kubenswrapper[4869]: I0130 21:44:17.038526 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:17 crc kubenswrapper[4869]: I0130 21:44:17.038561 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:17 crc kubenswrapper[4869]: I0130 21:44:17.038570 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:17 crc kubenswrapper[4869]: I0130 21:44:17.038584 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:17 crc kubenswrapper[4869]: I0130 21:44:17.038594 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:17Z","lastTransitionTime":"2026-01-30T21:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
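[Annotation. Both rotation deadlines logged for kubernetes.io/kubelet-serving (2025-11-09 here, 2025-11-07 a second later below) are already in the past relative to the node clock, so the kubelet treats rotation as immediately due on every pass. The deadline is not fixed: client-go's certificate manager draws it uniformly from 70-90% of the certificate's lifetime, which is why it changes between log lines. A small sketch of that computation, with the NotBefore of 2025-02-24 being an assumption inferred from the logged 2026-02-24 expiry and the roughly one-year band the deadlines fall in:]

```python
# Sketch of how the rotation deadline above is chosen. In client-go's
# certificate manager the deadline is NotBefore + U[0.7, 0.9) * lifetime,
# re-drawn on each pass, which is why the two logged deadlines differ.
# NotBefore below is an assumption (one-year cert inferred from the logged
# 2026-02-24 expiry); NotAfter is taken from the log.
import random
from datetime import datetime, timezone

not_before = datetime(2025, 2, 24, 5, 53, 3, tzinfo=timezone.utc)  # assumed
not_after = datetime(2026, 2, 24, 5, 53, 3, tzinfo=timezone.utc)   # from the log

lifetime = not_after - not_before
deadline = not_before + lifetime * random.uniform(0.7, 0.9)
print("rotation deadline:", deadline)  # lands between ~2025-11-07 and ~2026-01-19

# Against the node clock in these entries (2026-01-30T21:44Z), a deadline in
# that band is almost always already past, so rotation is perpetually "due".
now = datetime(2026, 1, 30, 21, 44, 16, tzinfo=timezone.utc)
print("rotation overdue:", now > deadline)
```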
[The five-entry recording cycle ("NodeHasSufficientMemory", "NodeHasNoDiskPressure", "NodeHasSufficientPID", "NodeNotReady", "Node became not ready") repeats roughly every 100 ms from 21:44:16.935399 through 21:44:17.866872 with no change other than the timestamps; elided as verbatim duplicates.]
Jan 30 21:44:17 crc kubenswrapper[4869]: I0130 21:44:17.876098 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:44:17 crc kubenswrapper[4869]: I0130 21:44:17.876194 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:44:17 crc kubenswrapper[4869]: I0130 21:44:17.876340 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:44:17 crc kubenswrapper[4869]: E0130 21:44:17.876471 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:44:17 crc kubenswrapper[4869]: E0130 21:44:17.876566 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd" Jan 30 21:44:17 crc kubenswrapper[4869]: I0130 21:44:17.876117 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:44:17 crc kubenswrapper[4869]: E0130 21:44:17.877310 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:44:17 crc kubenswrapper[4869]: I0130 21:44:17.877513 4869 scope.go:117] "RemoveContainer" containerID="53fdfcb96702330a4f64446223b5dbdcf2e031b0dc1ac2fbf9fac87ae5e51b60" Jan 30 21:44:17 crc kubenswrapper[4869]: I0130 21:44:17.934016 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 08:24:43.891877779 +0000 UTC Jan 30 21:44:17 crc kubenswrapper[4869]: I0130 21:44:17.970449 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:17 crc kubenswrapper[4869]: I0130 21:44:17.970651 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:17 crc kubenswrapper[4869]: I0130 21:44:17.970747 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:17 crc kubenswrapper[4869]: I0130 21:44:17.970854 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:17 crc kubenswrapper[4869]: I0130 21:44:17.970981 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:17Z","lastTransitionTime":"2026-01-30T21:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.074303 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.074348 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.074359 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.074384 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.074397 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:18Z","lastTransitionTime":"2026-01-30T21:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.177640 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.177689 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.177701 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.177719 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.177731 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:18Z","lastTransitionTime":"2026-01-30T21:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.242223 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-stqvf_c39d4fe5-06cd-4ea4-8336-bd481332c475/ovnkube-controller/1.log" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.244911 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" event={"ID":"c39d4fe5-06cd-4ea4-8336-bd481332c475","Type":"ContainerStarted","Data":"81c545886305c472b5c9ce68265f9d0fc2177184916b45135691b0413eb72c0c"} Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.245397 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.260436 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e768b90cb0dae0e7c594e45deebf9178321fecc14339fa1e49769e13c950ff9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.277671 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6fc0664-5e80-440d-a6e8-4189cdf5c500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a73c5cac1fdcb8d81bccdb2eea7b34e75752439d7871493f4afdd0325f02b2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fa528823e4fa19f4de380fa7c5ab78f40c763a6ea6624ba20e1a65a81980d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vzgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.280261 4869 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.280321 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.280339 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.280361 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.280377 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:18Z","lastTransitionTime":"2026-01-30T21:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.292119 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tz8jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac3c503-e284-4df8-ae5e-0084a884e456\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6d47a8f0a09eac03369c1f21e44d8ec6aaf85d4d7ad180579ecd30a6311061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgb6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tz8jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.309984 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c39d4fe5-06cd-4ea4-8336-bd481332c475\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81c545886305c472b5c9ce68265f9d0fc2177184
916b45135691b0413eb72c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53fdfcb96702330a4f64446223b5dbdcf2e031b0dc1ac2fbf9fac87ae5e51b60\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:43:56Z\\\",\\\"message\\\":\\\"rics-daemon-45w6p\\\\nI0130 21:43:56.316255 6395 ovn.go:94] Posting a Warning event for Pod openshift-multus/network-metrics-daemon-45w6p\\\\nI0130 21:43:56.316118 6395 services_controller.go:356] Processing sync for service openshift-cluster-samples-operator/metrics for network=default\\\\nI0130 21:43:56.316307 6395 services_controller.go:360] Finished syncing service metrics on namespace openshift-cluster-samples-operator for network=default : 188.546µs\\\\nI0130 21:43:56.316294 6395 factory.go:1336] Added *v1.Pod event handler 3\\\\nI0130 21:43:56.316355 6395 admin_network_policy_controller.go:133] Setting up event handlers for Admin Network Policy\\\\nI0130 21:43:56.316373 6395 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0130 21:43:56.316416 6395 factory.go:656] Stopping watch factory\\\\nI0130 21:43:56.316446 6395 ovnkube.go:599] Stopped ovnkube\\\\nI0130 21:43:56.316450 6395 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0130 21:43:56.316446 6395 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0130 21:43:56.316482 6395 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 21:43:56.316581 6395 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-stqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.327357 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qnlzn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e92fcbf8-b6b4-4531-a7cf-ed59225dd821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43fd4613a6d7e2360584cfe98ccaa73d8ed65139ba3c249aeb716b4303d170eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c73341f991fc768d5dcc57229c33204b0ade5469b5b9d7a970f38b97c3d6696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qnlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:18Z is after 2025-08-24T17:21:41Z" Jan 30 
21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.351492 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a48f728-068e-4f6e-9794-12d375245df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb0a197870fc1ba15cbe9437452dfeac28e7670fe2383e71f5e29276082f7236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f22a7faa65595d272f6cb952ac0054da3d3b6a0017dc004f52c65d823f1da06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d705fd79e95205c59ebe341512286e7be25a663cdfe41ec50f0f57582c9f5ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d12f1427fc164e0f92a0ce90e116d3239ad1c56376666cea3ef92178d0988076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 21:43:39.192174 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 21:43:39.192295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:43:39.192865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2603741252/tls.crt::/tmp/serving-cert-2603741252/tls.key\\\\\\\"\\\\nI0130 21:43:39.581448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:43:39.592260 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:43:39.592286 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:43:39.592313 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:43:39.592318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:43:39.598780 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:43:39.598801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:43:39.598815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:43:39.598819 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:43:39.598822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:43:39.598829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:43:39.601912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9563e1c219434c81376559138e931e8457d653060f639891e4a15b3a61bcf2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.364690 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b14ccc4339f1670442e8d9c16039fb3607ed2097daf0e58291333a9d566dc02e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.377683 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a98ef3151784bd76a916ca4dc6d85c25c10bf0d9eca4444780ceb2c826e439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7741a57e2e55c2f88115cf30643e5c4cd87ab58081677cccdd027bf1307a7a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.382207 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.382244 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.382255 4869 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.382269 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.382279 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:18Z","lastTransitionTime":"2026-01-30T21:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.389590 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13da86d5-c968-4e20-88dd-86b52c6402a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5f07c14a94021761f655ab66d0782a878dacdacbe9abc2c02db132e5f5721a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0c8909f295c8aea1aa2d6da1a709d1692b8fbdd6bb389d339d644655788328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://267daf399d26dd0253eed95c94b44bd4299990a14bb28159ddfd1f324dec6192\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fd36ca3590a916ec08c8ae4d3b410eb11261c6a5295a4274119f93e08079cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.400926 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.412112 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v9n4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ca98e-63b4-4337-af06-7525b62333b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c53920051c0c7658e9ffebac1382950ffea94ead520f2ca1b04cb6a0f9e15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk245\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\
\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v9n4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.423805 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f947b452-4e15-43ce-a4f4-3ca13d10f8d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://754edbe89075ed676689e09cc4aceaa7e45560a1db966e12c22a96be3bd3eba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68975f2095b7dcd1eaf59dd358c4ac1b731116418a8991290d4ee2d66611f2d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://568c346025fcd6f3b511f60206c6836f445c3e1be15135c064fed0507d20aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07626e9fc038e2c00d6751a0660aedfe5e66e6f735999b30cbffb7d6f8ff7011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07626e9fc038e2c00d6751a0660aedfe5e66e6f735999b30cbffb7d6f8ff7011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.437724 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.451871 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c24fb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a9e9fe8-01db-40b0-bdd8-e3d626df037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c97e257984df620e4f118db44cb63e7748a6d62887c3904f1a2a0641991602d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p2pd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c24fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.468583 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-45w6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b980f4db-64d3-48c9-9ff8-18f23c4888cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf95q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf95q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-45w6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.482733 4869 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.484227 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.484269 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.484284 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.484306 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.484320 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:18Z","lastTransitionTime":"2026-01-30T21:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.503791 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd656f9-7188-4998-8195-c2ec92442b7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeed7a751cd65072ffc31d4db0396578155d50518ad9dae28ced02985d1e4c62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e75455
9ed8e0d2f758687cd14b0c72a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\
\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jdbl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.586548 4869 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.586626 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.586637 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.586656 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.586668 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:18Z","lastTransitionTime":"2026-01-30T21:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.689413 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.689442 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.689450 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.689463 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.689472 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:18Z","lastTransitionTime":"2026-01-30T21:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.791660 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.791756 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.791782 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.791823 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.791850 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:18Z","lastTransitionTime":"2026-01-30T21:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.894528 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.894607 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.894630 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.894662 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.894684 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:18Z","lastTransitionTime":"2026-01-30T21:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.935715 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 00:47:45.173460729 +0000 UTC Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.996998 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.997042 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.997051 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.997064 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:18 crc kubenswrapper[4869]: I0130 21:44:18.997074 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:18Z","lastTransitionTime":"2026-01-30T21:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.099987 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.100058 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.100072 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.100095 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.100109 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:19Z","lastTransitionTime":"2026-01-30T21:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.203290 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.203375 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.203396 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.203428 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.203471 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:19Z","lastTransitionTime":"2026-01-30T21:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.252993 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-stqvf_c39d4fe5-06cd-4ea4-8336-bd481332c475/ovnkube-controller/2.log" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.254245 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-stqvf_c39d4fe5-06cd-4ea4-8336-bd481332c475/ovnkube-controller/1.log" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.258817 4869 generic.go:334] "Generic (PLEG): container finished" podID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerID="81c545886305c472b5c9ce68265f9d0fc2177184916b45135691b0413eb72c0c" exitCode=1 Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.258878 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" event={"ID":"c39d4fe5-06cd-4ea4-8336-bd481332c475","Type":"ContainerDied","Data":"81c545886305c472b5c9ce68265f9d0fc2177184916b45135691b0413eb72c0c"} Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.258973 4869 scope.go:117] "RemoveContainer" containerID="53fdfcb96702330a4f64446223b5dbdcf2e031b0dc1ac2fbf9fac87ae5e51b60" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.260180 4869 scope.go:117] "RemoveContainer" containerID="81c545886305c472b5c9ce68265f9d0fc2177184916b45135691b0413eb72c0c" Jan 30 21:44:19 crc kubenswrapper[4869]: E0130 21:44:19.260489 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-stqvf_openshift-ovn-kubernetes(c39d4fe5-06cd-4ea4-8336-bd481332c475)\"" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.284010 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13da86d5-c968-4e20-88dd-86b52c6402a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5f07c14a94021761f655ab66d0782a878dacdacbe9abc2c02db132e5f5721a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0c8909f295c8aea1aa2d6da1a709d1692b8fbdd6bb389d339d644655788328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://267daf399d26dd0253eed95c94b44bd4299990a14bb28159ddfd1f324dec6192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fd36ca3590a916ec08c8ae4d3b410eb11261c6a5295a4274119f93e08079cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.308625 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.308678 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.308697 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.308723 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.308743 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:19Z","lastTransitionTime":"2026-01-30T21:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.308982 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.329263 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b14ccc4339f1670442e8d9c16039fb3607ed2097daf0e58291333a9d566dc02e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.349514 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a98ef3151784bd76a916ca4dc6d85c25c10bf0d9eca4444780ceb2c826e439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7741a57e2e55c2f88115cf30643e5c4cd87ab58081677cccdd027bf1307a7a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.366259 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f947b452-4e15-43ce-a4f4-3ca13d10f8d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://754edbe89075ed676689e09cc4aceaa7e45560a1db966e12c22a96be3bd3eba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68975f2095b7dcd1eaf59dd358c4ac1b731116418a8991290d4ee2d66611f2d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://568c346025fcd6f3b511f60206c6836f445c3e1be15135c064fed0507d20aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07626e9fc038e2c00d6751a0660aedfe5e66e6f735999b30cbffb7d6f8ff7011\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07626e9fc038e2c00d6751a0660aedfe5e66e6f735999b30cbffb7d6f8ff7011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.388666 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.407068 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v9n4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ca98e-63b4-4337-af06-7525b62333b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c53920051c0c7658e9ffebac1382950ffea94ead520f2ca1b04cb6a0f9e15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk245\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v9n4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T21:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.411824 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.411880 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.411927 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.411954 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.411975 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:19Z","lastTransitionTime":"2026-01-30T21:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.430310 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.456241 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd656f9-7188-4998-8195-c2ec92442b7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeed7a751cd65072ffc31d4db0396578155d50518ad9dae28ced02985d1e4c62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jdbl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.477546 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c24fb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a9e9fe8-01db-40b0-bdd8-e3d626df037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c97e257984df620e4f118db44cb63e7748a6d62887c3904f1a2a0641991602d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p2pd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c24fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-01-30T21:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.499058 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-45w6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b980f4db-64d3-48c9-9ff8-18f23c4888cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf95q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf95q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-45w6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.515013 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:19 crc kubenswrapper[4869]: 
I0130 21:44:19.515115 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.515149 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.515194 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.515226 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:19Z","lastTransitionTime":"2026-01-30T21:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.525025 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a48f728-068e-4f6e-9794-12d375245df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb0a197870fc1ba15cbe9437452dfeac28e7670fe2383e71f5e29276082f7236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f22a7faa65595d272f6cb952ac0054da3d3b6a0017dc004f52c65d823f1da06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d705fd79e95205c59ebe341512286e7be25a663cdfe41ec50f0f57582c9f5ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d12f1427fc164e0f92a0ce90e116d3239ad1c56376666cea3ef92178d0988076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 21:43:39.192174 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 21:43:39.192295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:43:39.192865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2603741252/tls.crt::/tmp/serving-cert-2603741252/tls.key\\\\\\\"\\\\nI0130 21:43:39.581448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:43:39.592260 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:43:39.592286 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:43:39.592313 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:43:39.592318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:43:39.598780 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:43:39.598801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:43:39.598815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:43:39.598819 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:43:39.598822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 
21:43:39.598829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:43:39.601912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9563e1c219434c81376559138e931e8457d653060f639891e4a15b3a61bcf2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.550246 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e768b90cb0dae0e7c594e45deebf9178321fecc14339fa1e49769e13c950ff9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.570198 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6fc0664-5e80-440d-a6e8-4189cdf5c500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a73c5cac1fdcb8d81bccdb2eea7b34e75752439d7871493f4afdd0325f02b2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fa528823e4fa19f4de380fa7c5ab78f40c763a6ea6624ba20e1a65a81980d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vzgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.588034 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-tz8jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac3c503-e284-4df8-ae5e-0084a884e456\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6d47a8f0a09eac03369c1f21e44d8ec6aaf85d4d7ad180579ecd30a6311061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgb6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-tz8jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.618418 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.618509 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.618530 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.618561 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.618579 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:19Z","lastTransitionTime":"2026-01-30T21:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.624033 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c39d4fe5-06cd-4ea4-8336-bd481332c475\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81c545886305c472b5c9ce68265f9d0fc2177184
916b45135691b0413eb72c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53fdfcb96702330a4f64446223b5dbdcf2e031b0dc1ac2fbf9fac87ae5e51b60\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:43:56Z\\\",\\\"message\\\":\\\"rics-daemon-45w6p\\\\nI0130 21:43:56.316255 6395 ovn.go:94] Posting a Warning event for Pod openshift-multus/network-metrics-daemon-45w6p\\\\nI0130 21:43:56.316118 6395 services_controller.go:356] Processing sync for service openshift-cluster-samples-operator/metrics for network=default\\\\nI0130 21:43:56.316307 6395 services_controller.go:360] Finished syncing service metrics on namespace openshift-cluster-samples-operator for network=default : 188.546µs\\\\nI0130 21:43:56.316294 6395 factory.go:1336] Added *v1.Pod event handler 3\\\\nI0130 21:43:56.316355 6395 admin_network_policy_controller.go:133] Setting up event handlers for Admin Network Policy\\\\nI0130 21:43:56.316373 6395 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0130 21:43:56.316416 6395 factory.go:656] Stopping watch factory\\\\nI0130 21:43:56.316446 6395 ovnkube.go:599] Stopped ovnkube\\\\nI0130 21:43:56.316450 6395 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0130 21:43:56.316446 6395 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0130 21:43:56.316482 6395 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 21:43:56.316581 6395 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81c545886305c472b5c9ce68265f9d0fc2177184916b45135691b0413eb72c0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:44:18Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 21:44:18.748549 6628 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0130 21:44:18.748601 6628 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0130 21:44:18.748636 6628 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0130 21:44:18.748776 6628 factory.go:1336] Added *v1.Node event handler 7\\\\nI0130 21:44:18.748890 6628 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0130 21:44:18.749458 6628 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0130 21:44:18.749579 6628 controller.go:132] Adding controller 
ef_node_controller event handlers\\\\nI0130 21:44:18.749620 6628 ovnkube.go:599] Stopped ovnkube\\\\nI0130 21:44:18.749653 6628 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 21:44:18.749736 6628 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\
":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-stqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.643027 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qnlzn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e92fcbf8-b6b4-4531-a7cf-ed59225dd821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43fd4613a6d7e2360584cfe98ccaa73d8ed65139ba3c249aeb716b4303d170eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c73341f991fc768d5dcc57229c33204b0ade5469b5b9d7a970f38b97c3d6696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qnlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 
21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.723342 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.723408 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.723430 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.723468 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.723494 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:19Z","lastTransitionTime":"2026-01-30T21:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.827274 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.827361 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.827388 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.827428 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.827514 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:19Z","lastTransitionTime":"2026-01-30T21:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.877164 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:44:19 crc kubenswrapper[4869]: E0130 21:44:19.877351 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.877645 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:44:19 crc kubenswrapper[4869]: E0130 21:44:19.877736 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.877836 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.877849 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:44:19 crc kubenswrapper[4869]: E0130 21:44:19.878067 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:44:19 crc kubenswrapper[4869]: E0130 21:44:19.878198 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.906090 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a48f728-068e-4f6e-9794-12d375245df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb0a197870fc1ba15cbe9437452dfeac28e7670fe2383e71f5e29276082f7236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f22a7faa65595d272f6cb952ac0054da3d3b6a0017dc004f52c65d823f1da06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d705fd79e95205c59ebe341512286e7be25a663cdfe41ec50f0f57582c9f5ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resourc
es\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d12f1427fc164e0f92a0ce90e116d3239ad1c56376666cea3ef92178d0988076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 21:43:39.192174 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 21:43:39.192295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:43:39.192865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2603741252/tls.crt::/tmp/serving-cert-2603741252/tls.key\\\\\\\"\\\\nI0130 21:43:39.581448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:43:39.592260 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:43:39.592286 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:43:39.592313 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:43:39.592318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:43:39.598780 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:43:39.598801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:43:39.598815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:43:39.598819 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:43:39.598822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:43:39.598829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:43:39.601912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9563e1c219434c81376559138e931e8457d653060f639891e4a15b3a61bcf2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.920924 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e768b90cb0dae0e7c594e45deebf9178321fecc14339fa1e49769e13c950ff9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.930823 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.931034 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.931063 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.931094 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.931119 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:19Z","lastTransitionTime":"2026-01-30T21:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.935925 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 04:13:21.153146462 +0000 UTC Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.938726 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6fc0664-5e80-440d-a6e8-4189cdf5c500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a73c5cac1fdcb8d81bccdb2eea7b34e75752439d7871493f4afdd0325f02b2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fa528823e4fa19f4de380fa7c5ab78f40c763a6ea6624ba20e1a65a81980d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}
\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vzgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.959626 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tz8jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac3c503-e284-4df8-ae5e-0084a884e456\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6d47a8f0a09eac03369c1f21e44d8ec6aaf85d4d7ad180579ecd30a6311061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\"
:\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgb6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tz8jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:19 crc kubenswrapper[4869]: I0130 21:44:19.989260 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c39d4fe5-06cd-4ea4-8336-bd481332c475\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81c545886305c472b5c9ce68265f9d0fc2177184
916b45135691b0413eb72c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53fdfcb96702330a4f64446223b5dbdcf2e031b0dc1ac2fbf9fac87ae5e51b60\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:43:56Z\\\",\\\"message\\\":\\\"rics-daemon-45w6p\\\\nI0130 21:43:56.316255 6395 ovn.go:94] Posting a Warning event for Pod openshift-multus/network-metrics-daemon-45w6p\\\\nI0130 21:43:56.316118 6395 services_controller.go:356] Processing sync for service openshift-cluster-samples-operator/metrics for network=default\\\\nI0130 21:43:56.316307 6395 services_controller.go:360] Finished syncing service metrics on namespace openshift-cluster-samples-operator for network=default : 188.546µs\\\\nI0130 21:43:56.316294 6395 factory.go:1336] Added *v1.Pod event handler 3\\\\nI0130 21:43:56.316355 6395 admin_network_policy_controller.go:133] Setting up event handlers for Admin Network Policy\\\\nI0130 21:43:56.316373 6395 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0130 21:43:56.316416 6395 factory.go:656] Stopping watch factory\\\\nI0130 21:43:56.316446 6395 ovnkube.go:599] Stopped ovnkube\\\\nI0130 21:43:56.316450 6395 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0130 21:43:56.316446 6395 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0130 21:43:56.316482 6395 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 21:43:56.316581 6395 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81c545886305c472b5c9ce68265f9d0fc2177184916b45135691b0413eb72c0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:44:18Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 21:44:18.748549 6628 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0130 21:44:18.748601 6628 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0130 21:44:18.748636 6628 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0130 21:44:18.748776 6628 factory.go:1336] Added *v1.Node event handler 7\\\\nI0130 21:44:18.748890 6628 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0130 21:44:18.749458 6628 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0130 21:44:18.749579 6628 controller.go:132] Adding controller 
ef_node_controller event handlers\\\\nI0130 21:44:18.749620 6628 ovnkube.go:599] Stopped ovnkube\\\\nI0130 21:44:18.749653 6628 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 21:44:18.749736 6628 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\
":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-stqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.006567 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qnlzn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e92fcbf8-b6b4-4531-a7cf-ed59225dd821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43fd4613a6d7e2360584cfe98ccaa73d8ed65139ba3c249aeb716b4303d170eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c73341f991fc768d5dcc57229c33204b0ade5469b5b9d7a970f38b97c3d6696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qnlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 
21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.025671 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13da86d5-c968-4e20-88dd-86b52c6402a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5f07c14a94021761f655ab66d0782a878dacdacbe9abc2c02db132e5f5721a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0c8909f295c8aea1aa2d6da1a709d1692b8fbdd6bb389d339d644655788328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://267daf399d26dd0253eed95c94b44bd4299990a14bb28159ddfd1f324dec6192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fd36ca3590a916ec08c8ae4d3b410eb11261c6a5295a4274119f93e08079cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.034524 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.034611 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.034631 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.034688 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.034710 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:20Z","lastTransitionTime":"2026-01-30T21:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.041962 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.059846 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b14ccc4339f1670442e8d9c16039fb3607ed2097daf0e58291333a9d566dc02e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.077498 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a98ef3151784bd76a916ca4dc6d85c25c10bf0d9eca4444780ceb2c826e439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7741a57e2e55c2f88115cf30643e5c4cd87ab58081677cccdd027bf1307a7a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.095185 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f947b452-4e15-43ce-a4f4-3ca13d10f8d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://754edbe89075ed676689e09cc4aceaa7e45560a1db966e12c22a96be3bd3eba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68975f2095b7dcd1eaf59dd358c4ac1b731116418a8991290d4ee2d66611f2d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://568c346025fcd6f3b511f60206c6836f445c3e1be15135c064fed0507d20aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07626e9fc038e2c00d6751a0660aedfe5e66e6f735999b30cbffb7d6f8ff7011\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07626e9fc038e2c00d6751a0660aedfe5e66e6f735999b30cbffb7d6f8ff7011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.113079 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.126671 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v9n4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ca98e-63b4-4337-af06-7525b62333b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c53920051c0c7658e9ffebac1382950ffea94ead520f2ca1b04cb6a0f9e15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk245\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v9n4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T21:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.138812 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.138870 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.138888 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.138940 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.138962 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:20Z","lastTransitionTime":"2026-01-30T21:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.147304 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.177318 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd656f9-7188-4998-8195-c2ec92442b7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeed7a751cd65072ffc31d4db0396578155d50518ad9dae28ced02985d1e4c62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jdbl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.193056 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c24fb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a9e9fe8-01db-40b0-bdd8-e3d626df037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c97e257984df620e4f118db44cb63e7748a6d62887c3904f1a2a0641991602d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p2pd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c24fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-01-30T21:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.210365 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-45w6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b980f4db-64d3-48c9-9ff8-18f23c4888cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf95q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf95q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-45w6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.242347 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:20 crc kubenswrapper[4869]: 
I0130 21:44:20.242393 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.242411 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.242439 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.242458 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:20Z","lastTransitionTime":"2026-01-30T21:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.266523 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-stqvf_c39d4fe5-06cd-4ea4-8336-bd481332c475/ovnkube-controller/2.log" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.273738 4869 scope.go:117] "RemoveContainer" containerID="81c545886305c472b5c9ce68265f9d0fc2177184916b45135691b0413eb72c0c" Jan 30 21:44:20 crc kubenswrapper[4869]: E0130 21:44:20.274401 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-stqvf_openshift-ovn-kubernetes(c39d4fe5-06cd-4ea4-8336-bd481332c475)\"" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.296515 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.323724 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd656f9-7188-4998-8195-c2ec92442b7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeed7a751cd65072ffc31d4db0396578155d50518ad9dae28ced02985d1e4c62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jdbl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.341712 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c24fb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a9e9fe8-01db-40b0-bdd8-e3d626df037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c97e257984df620e4f118db44cb63e7748a6d62887c3904f1a2a0641991602d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p2pd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c24fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-01-30T21:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.345509 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.345607 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.345631 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.345664 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.345687 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:20Z","lastTransitionTime":"2026-01-30T21:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.362063 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-45w6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b980f4db-64d3-48c9-9ff8-18f23c4888cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf95q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf95q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-45w6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.380921 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6fc0664-5e80-440d-a6e8-4189cdf5c500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a73c5cac1fdcb8d81bccdb2eea7b34e75752439d7871493f4afdd0325f02b2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fa528823e4fa19f4de380fa7c5ab78f40c763a6ea6624ba20e1a65a81980d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vzgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.400415 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-tz8jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac3c503-e284-4df8-ae5e-0084a884e456\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6d47a8f0a09eac03369c1f21e44d8ec6aaf85d4d7ad180579ecd30a6311061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgb6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-tz8jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.424789 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c39d4fe5-06cd-4ea4-8336-bd481332c475\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81c545886305c472b5c9ce68265f9d0fc2177184916b45135691b0413eb72c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81c545886305c472b5c9ce68265f9d0fc2177184916b45135691b0413eb72c0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:44:18Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 21:44:18.748549 6628 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0130 21:44:18.748601 6628 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0130 21:44:18.748636 6628 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0130 21:44:18.748776 6628 factory.go:1336] Added *v1.Node event handler 7\\\\nI0130 21:44:18.748890 6628 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0130 21:44:18.749458 6628 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0130 21:44:18.749579 6628 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0130 21:44:18.749620 6628 ovnkube.go:599] Stopped ovnkube\\\\nI0130 21:44:18.749653 6628 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 21:44:18.749736 6628 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:44:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-stqvf_openshift-ovn-kubernetes(c39d4fe5-06cd-4ea4-8336-bd481332c475)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"nam
e\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-stqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.440747 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qnlzn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e92fcbf8-b6b4-4531-a7cf-ed59225dd821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43fd4613a6d7e2360584cfe98ccaa73d8ed65139ba3c249aeb716b4303d170eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c73341f991fc768d5dcc57229c33204b0ade5469b5b9d7a970f38b97c3d6696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qnlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 
21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.449451 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.449553 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.449584 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.449618 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.449643 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:20Z","lastTransitionTime":"2026-01-30T21:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.463396 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a48f728-068e-4f6e-9794-12d375245df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb0a197870fc1ba15cbe9437452dfeac28e7670fe2383e71f5e29276082f7236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f22a7faa65595d272f6cb952ac0054da3d3b6a0017dc004f52c65d823f1da06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d
7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d705fd79e95205c59ebe341512286e7be25a663cdfe41ec50f0f57582c9f5ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d12f1427fc164e0f92a0ce90e116d3239ad1c56376666cea3ef92178d0988076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 21:43:39.192174 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 21:43:39.192295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:43:39.192865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2603741252/tls.crt::/tmp/serving-cert-2603741252/tls.key\\\\\\\"\\\\nI0130 21:43:39.581448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:43:39.592260 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:43:39.592286 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:43:39.592313 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:43:39.592318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:43:39.598780 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:43:39.598801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:43:39.598815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:43:39.598819 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:43:39.598822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:43:39.598829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:43:39.601912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9563e1c219434c81376559138e931e8457d653060f639891e4a15b3a61bcf2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.485113 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e768b90cb0dae0e7c594e45deebf9178321fecc14339fa1e49769e13c950ff9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.504611 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13da86d5-c968-4e20-88dd-86b52c6402a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5f07c14a94021761f655ab66d0782a878dacdacbe9abc2c02db132e5f5721a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0c8909f295c8aea1aa2d6da1a709d1692b8fbdd6bb389d339d644655788328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://267daf399d26dd0253eed95c94b44bd4299990a14bb28159ddfd1f324dec6192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fd36ca3590a916ec08c8ae4d3b410eb11261c6a5295a4274119f93e08079cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.524141 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.544510 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b14ccc4339f1670442e8d9c16039fb3607ed2097daf0e58291333a9d566dc02e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.553385 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.553441 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.553455 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.553479 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.553496 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:20Z","lastTransitionTime":"2026-01-30T21:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.565659 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a98ef3151784bd76a916ca4dc6d85c25c10bf0d9eca4444780ceb2c826e439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7741a57e2e55c2f88115cf30643e5c4cd87ab58081677cccdd027bf1307a7a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"o
vnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.585110 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f947b452-4e15-43ce-a4f4-3ca13d10f8d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://754edbe89075ed676689e09cc4aceaa7e45560a1db966e12c22a96be3bd3eba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68975f2095b7dcd1eaf59dd358c4ac1b731116418a8991290d4ee2d66611f2d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://568c346025fcd6f3b511f60206c6836f445c3e1be15135c064fed0507d20aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf
86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07626e9fc038e2c00d6751a0660aedfe5e66e6f735999b30cbffb7d6f8ff7011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07626e9fc038e2c00d6751a0660aedfe5e66e6f735999b30cbffb7d6f8ff7011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.609037 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.624951 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v9n4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ca98e-63b4-4337-af06-7525b62333b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c53920051c0c7658e9ffebac1382950ffea94ead520f2ca1b04cb6a0f9e15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk245\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v9n4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T21:44:20Z is after 2025-08-24T17:21:41Z"
Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.656725 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.656792 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.656807 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.656831 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.656846 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:20Z","lastTransitionTime":"2026-01-30T21:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.760152 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.760232 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.760254 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.760292 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
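Every "Failed to update status for pod" entry above fails the same way: each status patch must pass the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743, and that webhook's serving certificate expired at 2025-08-24T17:21:41Z while the node clock reads 2026-01-30. Go's TLS verifier rejects the handshake with exactly the NotAfter comparison quoted in the log. A minimal sketch of that validity-window check (illustrative only, not kubelet or webhook source; the certificate path is a made-up placeholder):

// Sketch: reproduce the x509 validity-window check behind
// "certificate has expired or is not yet valid" in the log above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	pemBytes, err := os.ReadFile("/tmp/webhook-cert.pem") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now()
	// The log's "current time 2026-01-30T21:44:20Z is after
	// 2025-08-24T17:21:41Z" means now.After(cert.NotAfter) was true.
	switch {
	case now.After(cert.NotAfter):
		fmt.Printf("expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Printf("not yet valid: current time %s is before %s\n",
			now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
	default:
		fmt.Println("certificate is within its validity window")
	}
}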
Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.760317 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:20Z","lastTransitionTime":"2026-01-30T21:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.863877 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.863982 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.863994 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.864016 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.864031 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:20Z","lastTransitionTime":"2026-01-30T21:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.936593 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 04:36:56.831607096 +0000 UTC
Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.972126 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.972216 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.972248 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.972284 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
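The certificate_manager.go:356 line above is a separate thread of activity: the kubelet-serving certificate is still valid until 2026-02-24, but the logged rotation deadline (2025-11-17 here, different values in the later entries) already lies in the past, so the rotation loop keeps re-running and recomputes the deadline on every pass. In upstream client-go the deadline is a jittered point late in the certificate's validity window, which is why each recomputation logs a different value. A rough sketch of that idea (the 70-90% jitter band and the one-year NotBefore are assumptions for illustration, not values from this log or the exact upstream constants):

// Sketch of a jittered rotation deadline in the style of client-go's
// certificate manager; illustrative only, not the kubelet's code.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// nextRotationDeadline picks a random point roughly 70-90% of the way
// through the certificate's lifetime (assumed jitter band).
func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// Expiration taken from the log; the issue time is assumed, since the
	// log does not show NotBefore.
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
	notBefore := notAfter.AddDate(-1, 0, 0)

	// Recomputing yields a different deadline each time, matching the
	// changing "rotation deadline" values in successive log entries.
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline:", nextRotationDeadline(notBefore, notAfter))
	}
}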
Jan 30 21:44:20 crc kubenswrapper[4869]: I0130 21:44:20.972304 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:20Z","lastTransitionTime":"2026-01-30T21:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:21 crc kubenswrapper[4869]: I0130 21:44:21.075551 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:21 crc kubenswrapper[4869]: I0130 21:44:21.075615 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:21 crc kubenswrapper[4869]: I0130 21:44:21.075645 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:21 crc kubenswrapper[4869]: I0130 21:44:21.075661 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:21 crc kubenswrapper[4869]: I0130 21:44:21.075671 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:21Z","lastTransitionTime":"2026-01-30T21:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:21 crc kubenswrapper[4869]: I0130 21:44:21.180310 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:21 crc kubenswrapper[4869]: I0130 21:44:21.180382 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:21 crc kubenswrapper[4869]: I0130 21:44:21.180401 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:21 crc kubenswrapper[4869]: I0130 21:44:21.180426 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:21 crc kubenswrapper[4869]: I0130 21:44:21.180444 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:21Z","lastTransitionTime":"2026-01-30T21:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:21 crc kubenswrapper[4869]: I0130 21:44:21.283404 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:21 crc kubenswrapper[4869]: I0130 21:44:21.283464 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:21 crc kubenswrapper[4869]: I0130 21:44:21.283476 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:21 crc kubenswrapper[4869]: I0130 21:44:21.283497 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
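The five-line NodeNotReady block repeating roughly every 100ms comes from the kubelet's runtime status sync: the container runtime reports NetworkReady=false because no CNI network configuration exists yet in /etc/kubernetes/cni/net.d/ (on OpenShift that file is written by the cluster network operator once the network plugin comes up), and the node is held NotReady until one appears. The readiness test amounts to scanning that directory for config files, roughly as sketched below (illustrative, not CRI-O or kubelet source):

// Sketch: look for CNI network configs the way a runtime's CNI loader
// does, matching *.conf, *.conflist and *.json in the config directory.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log
	var found []string
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(confDir, pattern))
		if err != nil {
			continue // only possible with a malformed pattern
		}
		found = append(found, matches...)
	}
	if len(found) == 0 {
		// This is the node's current state: the runtime keeps reporting
		// NetworkReady=false until the network operator writes a config.
		fmt.Println("no CNI configuration file in", confDir)
		os.Exit(1)
	}
	fmt.Println("CNI configs:", found)
}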
Jan 30 21:44:21 crc kubenswrapper[4869]: I0130 21:44:21.283511 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:21Z","lastTransitionTime":"2026-01-30T21:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:21 crc kubenswrapper[4869]: I0130 21:44:21.386627 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:21 crc kubenswrapper[4869]: I0130 21:44:21.386697 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:21 crc kubenswrapper[4869]: I0130 21:44:21.386719 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:21 crc kubenswrapper[4869]: I0130 21:44:21.386746 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:21 crc kubenswrapper[4869]: I0130 21:44:21.386766 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:21Z","lastTransitionTime":"2026-01-30T21:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:21 crc kubenswrapper[4869]: I0130 21:44:21.490367 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:21 crc kubenswrapper[4869]: I0130 21:44:21.490438 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:21 crc kubenswrapper[4869]: I0130 21:44:21.490456 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:21 crc kubenswrapper[4869]: I0130 21:44:21.490480 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:21 crc kubenswrapper[4869]: I0130 21:44:21.490494 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:21Z","lastTransitionTime":"2026-01-30T21:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:21 crc kubenswrapper[4869]: I0130 21:44:21.594085 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:21 crc kubenswrapper[4869]: I0130 21:44:21.594158 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:21 crc kubenswrapper[4869]: I0130 21:44:21.594183 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:21 crc kubenswrapper[4869]: I0130 21:44:21.594215 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:21 crc kubenswrapper[4869]: I0130 21:44:21.594238 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:21Z","lastTransitionTime":"2026-01-30T21:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:21 crc kubenswrapper[4869]: I0130 21:44:21.697456 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:21 crc kubenswrapper[4869]: I0130 21:44:21.697508 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:21 crc kubenswrapper[4869]: I0130 21:44:21.697525 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:21 crc kubenswrapper[4869]: I0130 21:44:21.697548 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:21 crc kubenswrapper[4869]: I0130 21:44:21.697565 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:21Z","lastTransitionTime":"2026-01-30T21:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:21 crc kubenswrapper[4869]: I0130 21:44:21.801763 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:21 crc kubenswrapper[4869]: I0130 21:44:21.802224 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:21 crc kubenswrapper[4869]: I0130 21:44:21.802441 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:21 crc kubenswrapper[4869]: I0130 21:44:21.802646 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:21 crc kubenswrapper[4869]: I0130 21:44:21.802853 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:21Z","lastTransitionTime":"2026-01-30T21:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:21 crc kubenswrapper[4869]: I0130 21:44:21.876747 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:44:21 crc kubenswrapper[4869]: I0130 21:44:21.876832 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:44:21 crc kubenswrapper[4869]: E0130 21:44:21.876918 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
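The util.go "No sandbox for pod can be found" entries paired with pod_workers "Error syncing pod, skipping" show the other half of the same condition: pods whose sandboxes need a CNI-managed network are skipped while NetworkReady=false, whereas host-network pods (node-resolver and iptables-alerter, both running earlier in this log) are unaffected. A toy sketch of that gate (hypothetical types and function; the real decision lives in the kubelet's pod workers):

// Sketch of the gating implied by the entries above: non-host-network
// pods wait for the network plugin; host-network pods may start.
package main

import (
	"errors"
	"fmt"
)

type pod struct {
	name        string
	hostNetwork bool
}

var errNetworkNotReady = errors.New(
	"network is not ready: container runtime network not ready: NetworkReady=false")

// canStartSandbox is a hypothetical helper, not kubelet API.
func canStartSandbox(p pod, networkReady bool) error {
	if networkReady || p.hostNetwork {
		return nil
	}
	return errNetworkNotReady
}

func main() {
	pods := []pod{
		{"openshift-multus/network-metrics-daemon-45w6p", false},
		{"openshift-dns/node-resolver-v9n4p", true}, // host network (assumed): unaffected
	}
	for _, p := range pods {
		if err := canStartSandbox(p, false); err != nil {
			fmt.Printf("Error syncing pod, skipping: %v pod=%q\n", err, p.name)
			continue
		}
		fmt.Printf("starting sandbox for %q\n", p.name)
	}
}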
Jan 30 21:44:21 crc kubenswrapper[4869]: I0130 21:44:21.877013 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:44:21 crc kubenswrapper[4869]: I0130 21:44:21.877034 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p"
Jan 30 21:44:21 crc kubenswrapper[4869]: E0130 21:44:21.877041 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:44:21 crc kubenswrapper[4869]: E0130 21:44:21.877205 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:44:21 crc kubenswrapper[4869]: E0130 21:44:21.877288 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd"
Jan 30 21:44:21 crc kubenswrapper[4869]: I0130 21:44:21.906541 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:21 crc kubenswrapper[4869]: I0130 21:44:21.906588 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:21 crc kubenswrapper[4869]: I0130 21:44:21.906599 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:21 crc kubenswrapper[4869]: I0130 21:44:21.906615 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:21 crc kubenswrapper[4869]: I0130 21:44:21.906629 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:21Z","lastTransitionTime":"2026-01-30T21:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:21 crc kubenswrapper[4869]: I0130 21:44:21.937269 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 14:58:46.273022604 +0000 UTC Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.008408 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.008457 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.008473 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.008491 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.008504 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:22Z","lastTransitionTime":"2026-01-30T21:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.112416 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.112487 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.112507 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.112534 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.112553 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:22Z","lastTransitionTime":"2026-01-30T21:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.215164 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.215509 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.215735 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.215807 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.215872 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:22Z","lastTransitionTime":"2026-01-30T21:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.318038 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.318076 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.318088 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.318103 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.318115 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:22Z","lastTransitionTime":"2026-01-30T21:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.420978 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.421069 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.421097 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.421134 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.421165 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:22Z","lastTransitionTime":"2026-01-30T21:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.524361 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.524479 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.524506 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.524544 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.524578 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:22Z","lastTransitionTime":"2026-01-30T21:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.628716 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.628798 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.628825 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.628854 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.628875 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:22Z","lastTransitionTime":"2026-01-30T21:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.732665 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.732748 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.732760 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.732779 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.732796 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:22Z","lastTransitionTime":"2026-01-30T21:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.837339 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.837395 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.837413 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.837443 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.837463 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:22Z","lastTransitionTime":"2026-01-30T21:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.937605 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 18:48:20.059177461 +0000 UTC Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.940581 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.940644 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.940658 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.940679 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:22 crc kubenswrapper[4869]: I0130 21:44:22.940695 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:22Z","lastTransitionTime":"2026-01-30T21:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.044487 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.044547 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.044561 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.044581 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.044595 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:23Z","lastTransitionTime":"2026-01-30T21:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.151712 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.152203 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.152225 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.152258 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.152283 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:23Z","lastTransitionTime":"2026-01-30T21:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.255540 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.255619 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.255639 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.255669 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.255736 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:23Z","lastTransitionTime":"2026-01-30T21:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.359245 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.359310 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.359327 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.359353 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.359371 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:23Z","lastTransitionTime":"2026-01-30T21:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.462797 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.462848 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.462862 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.462888 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.462932 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:23Z","lastTransitionTime":"2026-01-30T21:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.567810 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.567857 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.567870 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.567912 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.567929 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:23Z","lastTransitionTime":"2026-01-30T21:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.670869 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.670949 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.670968 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.670994 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.671013 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:23Z","lastTransitionTime":"2026-01-30T21:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.774049 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.774115 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.774137 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.774165 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.774186 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:23Z","lastTransitionTime":"2026-01-30T21:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.875954 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.875969 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.876034 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:44:23 crc kubenswrapper[4869]: E0130 21:44:23.876277 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:44:23 crc kubenswrapper[4869]: E0130 21:44:23.876380 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.876418 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:44:23 crc kubenswrapper[4869]: E0130 21:44:23.876477 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:44:23 crc kubenswrapper[4869]: E0130 21:44:23.877376 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd" Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.879078 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.879153 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.879180 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.879208 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.879233 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:23Z","lastTransitionTime":"2026-01-30T21:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.937755 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 08:06:40.676275661 +0000 UTC Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.982075 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.982152 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.982168 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.982192 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:23 crc kubenswrapper[4869]: I0130 21:44:23.982236 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:23Z","lastTransitionTime":"2026-01-30T21:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:24 crc kubenswrapper[4869]: I0130 21:44:24.084199 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:24 crc kubenswrapper[4869]: I0130 21:44:24.084264 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:24 crc kubenswrapper[4869]: I0130 21:44:24.084273 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:24 crc kubenswrapper[4869]: I0130 21:44:24.084290 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:24 crc kubenswrapper[4869]: I0130 21:44:24.084301 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:24Z","lastTransitionTime":"2026-01-30T21:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:24 crc kubenswrapper[4869]: I0130 21:44:24.188099 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:24 crc kubenswrapper[4869]: I0130 21:44:24.188174 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:24 crc kubenswrapper[4869]: I0130 21:44:24.188201 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:24 crc kubenswrapper[4869]: I0130 21:44:24.188235 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:24 crc kubenswrapper[4869]: I0130 21:44:24.188262 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:24Z","lastTransitionTime":"2026-01-30T21:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:24 crc kubenswrapper[4869]: I0130 21:44:24.291272 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:24 crc kubenswrapper[4869]: I0130 21:44:24.291544 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:24 crc kubenswrapper[4869]: I0130 21:44:24.291624 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:24 crc kubenswrapper[4869]: I0130 21:44:24.291721 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:24 crc kubenswrapper[4869]: I0130 21:44:24.291807 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:24Z","lastTransitionTime":"2026-01-30T21:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:24 crc kubenswrapper[4869]: I0130 21:44:24.394537 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:24 crc kubenswrapper[4869]: I0130 21:44:24.394618 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:24 crc kubenswrapper[4869]: I0130 21:44:24.394636 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:24 crc kubenswrapper[4869]: I0130 21:44:24.394664 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:24 crc kubenswrapper[4869]: I0130 21:44:24.394679 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:24Z","lastTransitionTime":"2026-01-30T21:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:24 crc kubenswrapper[4869]: I0130 21:44:24.498321 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:24 crc kubenswrapper[4869]: I0130 21:44:24.498382 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:24 crc kubenswrapper[4869]: I0130 21:44:24.498398 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:24 crc kubenswrapper[4869]: I0130 21:44:24.498420 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:24 crc kubenswrapper[4869]: I0130 21:44:24.498435 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:24Z","lastTransitionTime":"2026-01-30T21:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:24 crc kubenswrapper[4869]: I0130 21:44:24.601387 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:24 crc kubenswrapper[4869]: I0130 21:44:24.601465 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:24 crc kubenswrapper[4869]: I0130 21:44:24.601475 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:24 crc kubenswrapper[4869]: I0130 21:44:24.601514 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:24 crc kubenswrapper[4869]: I0130 21:44:24.601528 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:24Z","lastTransitionTime":"2026-01-30T21:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:24 crc kubenswrapper[4869]: I0130 21:44:24.704636 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:24 crc kubenswrapper[4869]: I0130 21:44:24.704692 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:24 crc kubenswrapper[4869]: I0130 21:44:24.704702 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:24 crc kubenswrapper[4869]: I0130 21:44:24.704719 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:24 crc kubenswrapper[4869]: I0130 21:44:24.704730 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:24Z","lastTransitionTime":"2026-01-30T21:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:24 crc kubenswrapper[4869]: I0130 21:44:24.807550 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:24 crc kubenswrapper[4869]: I0130 21:44:24.807619 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:24 crc kubenswrapper[4869]: I0130 21:44:24.807632 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:24 crc kubenswrapper[4869]: I0130 21:44:24.807660 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:24 crc kubenswrapper[4869]: I0130 21:44:24.807672 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:24Z","lastTransitionTime":"2026-01-30T21:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:24 crc kubenswrapper[4869]: I0130 21:44:24.910391 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:24 crc kubenswrapper[4869]: I0130 21:44:24.910439 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:24 crc kubenswrapper[4869]: I0130 21:44:24.910450 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:24 crc kubenswrapper[4869]: I0130 21:44:24.910468 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:24 crc kubenswrapper[4869]: I0130 21:44:24.910479 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:24Z","lastTransitionTime":"2026-01-30T21:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:24 crc kubenswrapper[4869]: I0130 21:44:24.938723 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 08:04:50.590312342 +0000 UTC Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.013576 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.013628 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.013645 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.013665 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.013676 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:25Z","lastTransitionTime":"2026-01-30T21:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.116447 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.116484 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.116495 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.116509 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.116520 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:25Z","lastTransitionTime":"2026-01-30T21:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.218871 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.218943 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.218958 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.218981 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.218999 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:25Z","lastTransitionTime":"2026-01-30T21:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.320812 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.320856 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.320867 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.320883 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.320907 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:25Z","lastTransitionTime":"2026-01-30T21:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.423748 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.423806 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.423820 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.423842 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.423855 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:25Z","lastTransitionTime":"2026-01-30T21:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.526852 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.526913 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.526923 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.526939 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.526952 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:25Z","lastTransitionTime":"2026-01-30T21:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.630638 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.630684 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.630699 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.630725 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.630740 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:25Z","lastTransitionTime":"2026-01-30T21:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.734143 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.734420 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.734452 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.734489 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.734519 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:25Z","lastTransitionTime":"2026-01-30T21:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.837309 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.837355 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.837370 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.837387 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.837399 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:25Z","lastTransitionTime":"2026-01-30T21:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.876140 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.876188 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.876312 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:44:25 crc kubenswrapper[4869]: E0130 21:44:25.876327 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.876373 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:44:25 crc kubenswrapper[4869]: E0130 21:44:25.876497 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd" Jan 30 21:44:25 crc kubenswrapper[4869]: E0130 21:44:25.876579 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:44:25 crc kubenswrapper[4869]: E0130 21:44:25.876642 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.938970 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 18:14:40.872572425 +0000 UTC Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.940528 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.940596 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.940612 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.940636 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:25 crc kubenswrapper[4869]: I0130 21:44:25.940652 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:25Z","lastTransitionTime":"2026-01-30T21:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.043753 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.043798 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.043810 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.043828 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.043841 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:26Z","lastTransitionTime":"2026-01-30T21:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.146255 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.146296 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.146308 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.146356 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.146367 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:26Z","lastTransitionTime":"2026-01-30T21:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.249096 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.249138 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.249148 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.249161 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.249171 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:26Z","lastTransitionTime":"2026-01-30T21:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.352943 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.353014 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.353028 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.353052 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.353073 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:26Z","lastTransitionTime":"2026-01-30T21:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.455298 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.455339 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.455350 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.455364 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.455374 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:26Z","lastTransitionTime":"2026-01-30T21:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.494199 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b980f4db-64d3-48c9-9ff8-18f23c4888cd-metrics-certs\") pod \"network-metrics-daemon-45w6p\" (UID: \"b980f4db-64d3-48c9-9ff8-18f23c4888cd\") " pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:44:26 crc kubenswrapper[4869]: E0130 21:44:26.494343 4869 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:44:26 crc kubenswrapper[4869]: E0130 21:44:26.494397 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b980f4db-64d3-48c9-9ff8-18f23c4888cd-metrics-certs podName:b980f4db-64d3-48c9-9ff8-18f23c4888cd nodeName:}" failed. No retries permitted until 2026-01-30 21:44:58.494381298 +0000 UTC m=+99.380139323 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b980f4db-64d3-48c9-9ff8-18f23c4888cd-metrics-certs") pod "network-metrics-daemon-45w6p" (UID: "b980f4db-64d3-48c9-9ff8-18f23c4888cd") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.558018 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.558051 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.558061 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.558089 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.558099 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:26Z","lastTransitionTime":"2026-01-30T21:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.661071 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.661117 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.661133 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.661149 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.661161 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:26Z","lastTransitionTime":"2026-01-30T21:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.763468 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.763526 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.763534 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.763550 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.763561 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:26Z","lastTransitionTime":"2026-01-30T21:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.865878 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.865965 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.865977 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.865994 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.866006 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:26Z","lastTransitionTime":"2026-01-30T21:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.939465 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 16:28:25.655150686 +0000 UTC Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.944342 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.944402 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.944416 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.944440 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.944454 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:26Z","lastTransitionTime":"2026-01-30T21:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:26 crc kubenswrapper[4869]: E0130 21:44:26.957833 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eed4c80a-2486-4f50-8ae9-4ddbd620d70e\\\",\\\"systemUUID\\\":\\\"073254b5-c7c0-49f1-bed8-4438b0f03db1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.961762 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.961830 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.961845 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.961866 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.961906 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:26Z","lastTransitionTime":"2026-01-30T21:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:26 crc kubenswrapper[4869]: E0130 21:44:26.975799 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eed4c80a-2486-4f50-8ae9-4ddbd620d70e\\\",\\\"systemUUID\\\":\\\"073254b5-c7c0-49f1-bed8-4438b0f03db1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.979838 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.979925 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.979947 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.979968 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.980005 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:26Z","lastTransitionTime":"2026-01-30T21:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:26 crc kubenswrapper[4869]: E0130 21:44:26.994149 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eed4c80a-2486-4f50-8ae9-4ddbd620d70e\\\",\\\"systemUUID\\\":\\\"073254b5-c7c0-49f1-bed8-4438b0f03db1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.997984 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.998026 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.998039 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.998060 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:26 crc kubenswrapper[4869]: I0130 21:44:26.998073 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:26Z","lastTransitionTime":"2026-01-30T21:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:27 crc kubenswrapper[4869]: E0130 21:44:27.017206 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eed4c80a-2486-4f50-8ae9-4ddbd620d70e\\\",\\\"systemUUID\\\":\\\"073254b5-c7c0-49f1-bed8-4438b0f03db1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.020582 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.020741 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.020827 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.020931 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.021014 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:27Z","lastTransitionTime":"2026-01-30T21:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:27 crc kubenswrapper[4869]: E0130 21:44:27.032678 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eed4c80a-2486-4f50-8ae9-4ddbd620d70e\\\",\\\"systemUUID\\\":\\\"073254b5-c7c0-49f1-bed8-4438b0f03db1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:27 crc kubenswrapper[4869]: E0130 21:44:27.033166 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.035867 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.035942 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.035957 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.035977 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.036003 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:27Z","lastTransitionTime":"2026-01-30T21:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.139423 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.139483 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.139496 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.139517 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.139530 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:27Z","lastTransitionTime":"2026-01-30T21:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.243093 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.243146 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.243156 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.243174 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.243186 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:27Z","lastTransitionTime":"2026-01-30T21:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.346144 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.346267 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.346290 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.346322 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.346342 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:27Z","lastTransitionTime":"2026-01-30T21:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.449488 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.449556 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.449575 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.449601 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.449626 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:27Z","lastTransitionTime":"2026-01-30T21:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.553076 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.553126 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.553135 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.553152 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.553162 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:27Z","lastTransitionTime":"2026-01-30T21:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.656883 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.657020 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.657044 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.657079 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.657100 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:27Z","lastTransitionTime":"2026-01-30T21:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.760392 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.760459 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.760482 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.760508 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.760525 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:27Z","lastTransitionTime":"2026-01-30T21:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.862705 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.862765 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.862778 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.862797 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.862809 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:27Z","lastTransitionTime":"2026-01-30T21:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.876561 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.876641 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:44:27 crc kubenswrapper[4869]: E0130 21:44:27.876682 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.876712 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:44:27 crc kubenswrapper[4869]: E0130 21:44:27.876728 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:44:27 crc kubenswrapper[4869]: E0130 21:44:27.876804 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd" Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.876858 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:44:27 crc kubenswrapper[4869]: E0130 21:44:27.876948 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.939950 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 16:07:36.609767198 +0000 UTC Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.965618 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.965678 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.965691 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.965711 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:27 crc kubenswrapper[4869]: I0130 21:44:27.965722 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:27Z","lastTransitionTime":"2026-01-30T21:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.069248 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.069313 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.069334 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.069363 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.069381 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:28Z","lastTransitionTime":"2026-01-30T21:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.173179 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.173249 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.173271 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.173297 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.173315 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:28Z","lastTransitionTime":"2026-01-30T21:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.276474 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.276544 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.276564 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.276590 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.276605 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:28Z","lastTransitionTime":"2026-01-30T21:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.380563 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.380605 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.380616 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.380632 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.380643 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:28Z","lastTransitionTime":"2026-01-30T21:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.483470 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.483519 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.483527 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.483542 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.483586 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:28Z","lastTransitionTime":"2026-01-30T21:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.585590 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.585639 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.585652 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.585673 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.585690 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:28Z","lastTransitionTime":"2026-01-30T21:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.688257 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.688303 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.688314 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.688332 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.688346 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:28Z","lastTransitionTime":"2026-01-30T21:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.790921 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.790985 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.790999 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.791015 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.791025 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:28Z","lastTransitionTime":"2026-01-30T21:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.893381 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.893423 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.893434 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.893451 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.893463 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:28Z","lastTransitionTime":"2026-01-30T21:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
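
The same Ready=False condition the kubelet serializes in these entries can be read back from the API server. A hedged client-go sketch (the kubeconfig path is illustrative only, not taken from this log):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a reachable kubeconfig; this path is an assumption.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			// Mirrors the condition serialized in the log:
			// status=False, reason=KubeletNotReady while CNI is absent.
			fmt.Printf("Ready=%s reason=%s msg=%s\n", c.Status, c.Reason, c.Message)
		}
	}
}
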
Has your network provider started?"} Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.940696 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 14:52:38.458469061 +0000 UTC Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.997093 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.997163 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.997185 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.997215 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:28 crc kubenswrapper[4869]: I0130 21:44:28.997237 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:28Z","lastTransitionTime":"2026-01-30T21:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.099297 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.099335 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.099343 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.099364 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.099379 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:29Z","lastTransitionTime":"2026-01-30T21:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
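
The certificate_manager lines above track the kubelet serving certificate's expiry and a randomized rotation deadline, recomputed between 21:44:27 and 21:44:28; both recorded deadlines (2025-12-12 and 2025-12-31) already lie in the past relative to the node's clock, consistent with the certificate trouble visible further down. To inspect such a certificate directly, a small sketch using only the standard library (the PEM path is a common kubelet default and an assumption here, not taken from this log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Illustrative path: kubelets typically keep the serving cert
	// under /var/lib/kubelet/pki (assumption, not from the log).
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-server-current.pem")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	total := cert.NotAfter.Sub(cert.NotBefore)
	left := time.Until(cert.NotAfter)
	fmt.Printf("expires %s (%.0f%% of lifetime remaining)\n",
		cert.NotAfter.UTC(), 100*left.Seconds()/total.Seconds())
}
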
Has your network provider started?"} Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.201961 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.202005 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.202018 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.202035 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.202048 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:29Z","lastTransitionTime":"2026-01-30T21:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.303952 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.304088 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.304119 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.304153 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.304181 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:29Z","lastTransitionTime":"2026-01-30T21:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.304927 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tz8jn_dac3c503-e284-4df8-ae5e-0084a884e456/kube-multus/0.log" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.305001 4869 generic.go:334] "Generic (PLEG): container finished" podID="dac3c503-e284-4df8-ae5e-0084a884e456" containerID="6e6d47a8f0a09eac03369c1f21e44d8ec6aaf85d4d7ad180579ecd30a6311061" exitCode=1 Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.305038 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tz8jn" event={"ID":"dac3c503-e284-4df8-ae5e-0084a884e456","Type":"ContainerDied","Data":"6e6d47a8f0a09eac03369c1f21e44d8ec6aaf85d4d7ad180579ecd30a6311061"} Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.305631 4869 scope.go:117] "RemoveContainer" containerID="6e6d47a8f0a09eac03369c1f21e44d8ec6aaf85d4d7ad180579ecd30a6311061" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.324091 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a48f728-068e-4f6e-9794-12d375245df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb0a197870fc1ba15cbe9437452dfeac28e7670fe2383e71f5e29276082f7236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f22a7faa65595d272f6cb952ac0054da3d3b6a0017dc004f52c65d823f1da06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d705fd79e95205c59ebe341512286e7be25a663cdfe41ec50f0f57582c9f5ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d12f1427fc164e0f92a0ce90e116d3239ad1c56376666cea3ef92178d0988076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 21:43:39.192174 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 21:43:39.192295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:43:39.192865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2603741252/tls.crt::/tmp/serving-cert-2603741252/tls.key\\\\\\\"\\\\nI0130 21:43:39.581448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:43:39.592260 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:43:39.592286 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:43:39.592313 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:43:39.592318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:43:39.598780 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:43:39.598801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:43:39.598815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:43:39.598819 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:43:39.598822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0130 21:43:39.598829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:43:39.601912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9563e1c219434c81376559138e931e8457d653060f639891e4a15b3a61bcf2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.337033 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e768b90cb0dae0e7c594e45deebf9178321fecc14339fa1e49769e13c950ff9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.355138 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6fc0664-5e80-440d-a6e8-4189cdf5c500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a73c5cac1fdcb8d81bccdb2eea7b34e75752439d7871493f4afdd0325f02b2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fa528823e4fa19f4de380fa7c5ab78f40c763a6ea6624ba20e1a65a81980d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vzgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.370952 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-tz8jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac3c503-e284-4df8-ae5e-0084a884e456\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6d47a8f0a09eac03369c1f21e44d8ec6aaf85d4d7ad180579ecd30a6311061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6d47a8f0a09eac03369c1f21e44d8ec6aaf85d4d7ad180579ecd30a6311061\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:44:28Z\\\",\\\"message\\\":\\\"2026-01-30T21:43:42+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d7736fda-c03e-486c-aa38-7f904c4e5cbe\\\\n2026-01-30T21:43:42+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d7736fda-c03e-486c-aa38-7f904c4e5cbe to /host/opt/cni/bin/\\\\n2026-01-30T21:43:43Z [verbose] multus-daemon started\\\\n2026-01-30T21:43:43Z [verbose] Readiness Indicator file check\\\\n2026-01-30T21:44:28Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgb6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tz8jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.390477 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c39d4fe5-06cd-4ea4-8336-bd481332c475\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
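
The kube-multus container termination message above shows it exiting 1 after waiting roughly 45 seconds (21:43:43 to 21:44:28) for the OVN readiness indicator file. A stdlib-only sketch of that kind of wait (multus itself uses a PollImmediate-style helper, as the "pollimmediate error" text indicates; this loop only mirrors the observable behavior, and only the file path and timeout come from the log):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForFile polls for path until it exists or timeout elapses,
// loosely mirroring the readiness-indicator wait that timed out
// in the log ("timed out waiting for the condition").
func waitForFile(path string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(interval)
	}
}

func main() {
	// Path taken from the multus error message in the log.
	err := waitForFile("/host/run/multus/cni/net.d/10-ovn-kubernetes.conf",
		time.Second, 45*time.Second)
	fmt.Println(err)
}
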
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81c545886305c472b5c9ce68265f9d0fc2177184916b45135691b0413eb72c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81c545886305c472b5c9ce68265f9d0fc2177184916b45135691b0413eb72c0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:44:18Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 21:44:18.748549 6628 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0130 21:44:18.748601 6628 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0130 21:44:18.748636 6628 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0130 21:44:18.748776 6628 factory.go:1336] Added *v1.Node event handler 7\\\\nI0130 21:44:18.748890 6628 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0130 21:44:18.749458 6628 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0130 21:44:18.749579 6628 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0130 21:44:18.749620 6628 ovnkube.go:599] Stopped ovnkube\\\\nI0130 21:44:18.749653 6628 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 21:44:18.749736 6628 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:44:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-stqvf_openshift-ovn-kubernetes(c39d4fe5-06cd-4ea4-8336-bd481332c475)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-stqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.401545 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qnlzn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e92fcbf8-b6b4-4531-a7cf-ed59225dd821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43fd4613a6d7e2360584cfe98ccaa73d8ed65139ba3c249aeb716b4303d170eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tsq2
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c73341f991fc768d5dcc57229c33204b0ade5469b5b9d7a970f38b97c3d6696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qnlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.406570 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.406610 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.406623 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.406640 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.406653 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:29Z","lastTransitionTime":"2026-01-30T21:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.414321 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13da86d5-c968-4e20-88dd-86b52c6402a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5f07c14a94021761f655ab66d0782a878dacdacbe9abc2c02db132e5f5721a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0c8909f295c8aea1aa2d6da1a709d1692b8fbdd6bb389d339d644655788328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://267daf399d26dd0253eed95c94b44bd4299990a14bb28159ddfd1f324dec6192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fd36ca3590a916ec08c8ae4d3b410eb11261c6a5295a4274119f93e08079cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.426489 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.438157 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b14ccc4339f1670442e8d9c16039fb3607ed2097daf0e58291333a9d566dc02e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.449569 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a98ef3151784bd76a916ca4dc6d85c25c10bf0d9eca4444780ceb2c826e439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7741a57e2e55c2f88115cf30643e5c4cd87ab58081677cccdd027bf1307a7a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.460843 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f947b452-4e15-43ce-a4f4-3ca13d10f8d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://754edbe89075ed676689e09cc4aceaa7e45560a1db966e12c22a96be3bd3eba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68975f2095b7dcd1eaf59dd358c4ac1b731116418a8991290d4ee2d66611f2d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://568c346025fcd6f3b511f60206c6836f445c3e1be15135c064fed0507d20aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07626e9fc038e2c00d6751a0660aedfe5e66e6f735999b30cbffb7d6f8ff7011\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07626e9fc038e2c00d6751a0660aedfe5e66e6f735999b30cbffb7d6f8ff7011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.471999 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.483153 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v9n4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ca98e-63b4-4337-af06-7525b62333b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c53920051c0c7658e9ffebac1382950ffea94ead520f2ca1b04cb6a0f9e15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk245\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v9n4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T21:44:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.495031 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.508975 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.509010 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.509020 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.509037 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.509048 4869 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:29Z","lastTransitionTime":"2026-01-30T21:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.509073 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd656f9-7188-4998-8195-c2ec92442b7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeed7a751cd65072ffc31d4db0396578155d50518ad9dae28ced02985d1e4c62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"co
ntainerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jdbl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.519149 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c24fb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a9e9fe8-01db-40b0-bdd8-e3d626df037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c97e257984df620e4f118db44cb63e7748a6d62887c3904f1a2a0641991602d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p2pd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c24fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.530178 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-45w6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b980f4db-64d3-48c9-9ff8-18f23c4888cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf95q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf95q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-45w6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.611449 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.611495 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.611505 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.611520 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.611529 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:29Z","lastTransitionTime":"2026-01-30T21:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.713425 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.713474 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.713486 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.713503 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.713517 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:29Z","lastTransitionTime":"2026-01-30T21:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.815817 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.815871 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.815882 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.815933 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.815947 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:29Z","lastTransitionTime":"2026-01-30T21:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.876547 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.876635 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.876662 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.876841 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:44:29 crc kubenswrapper[4869]: E0130 21:44:29.876830 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:44:29 crc kubenswrapper[4869]: E0130 21:44:29.877019 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd" Jan 30 21:44:29 crc kubenswrapper[4869]: E0130 21:44:29.877109 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:44:29 crc kubenswrapper[4869]: E0130 21:44:29.877194 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.892504 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f947b452-4e15-43ce-a4f4-3ca13d10f8d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://754edbe89075ed676689e09cc4aceaa7e45560a1db966e12c22a96be3bd3eba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68975f2095b7dcd1eaf59dd358c4ac1b731116418a8991290d4ee2d66611f2d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://568c346025fcd6f3b511f60206c6836f445c3e1be15135c064fed0507d20aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir
\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07626e9fc038e2c00d6751a0660aedfe5e66e6f735999b30cbffb7d6f8ff7011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07626e9fc038e2c00d6751a0660aedfe5e66e6f735999b30cbffb7d6f8ff7011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.907582 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.917863 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.917925 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.917945 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.917960 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.917969 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:29Z","lastTransitionTime":"2026-01-30T21:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.919766 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v9n4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ca98e-63b4-4337-af06-7525b62333b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c53920051c0c7658e9ffebac1382950ffea94ead520f2ca1b04cb6a0f9e15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk245\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v9n4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.931100 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.941849 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 22:52:39.216331396 +0000 UTC Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.946821 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd656f9-7188-4998-8195-c2ec92442b7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeed7a751cd65072ffc31d4db0396578155d50518ad9dae28ced02985d1e4c62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jdbl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.957890 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c24fb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a9e9fe8-01db-40b0-bdd8-e3d626df037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c97e257984df620e4f118db44cb63e7748a6d62887c3904f1a2a0641991602d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p2pd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c24fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.973177 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-45w6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b980f4db-64d3-48c9-9ff8-18f23c4888cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf95q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf95q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-45w6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:29 crc kubenswrapper[4869]: I0130 21:44:29.986492 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tz8jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac3c503-e284-4df8-ae5e-0084a884e456\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6d47a8f0a09eac03369c1f21e44d8ec6aaf85d4d7ad180579ecd30a6311061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6d47a8f0a09eac03369c1f21e44d8ec6aaf85d4d7ad180579ecd30a6311061\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:44:28Z\\\",\\\"message\\\":\\\"2026-01-30T21:43:42+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d7736fda-c03e-486c-aa38-7f904c4e5cbe\\\\n2026-01-30T21:43:42+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d7736fda-c03e-486c-aa38-7f904c4e5cbe to /host/opt/cni/bin/\\\\n2026-01-30T21:43:43Z [verbose] multus-daemon started\\\\n2026-01-30T21:43:43Z [verbose] Readiness Indicator file check\\\\n2026-01-30T21:44:28Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgb6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-tz8jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.007630 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c39d4fe5-06cd-4ea4-8336-bd481332c475\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81c545886305c472b5c9ce68265f9d0fc2177184916b45135691b0413eb72c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81c545886305c472b5c9ce68265f9d0fc2177184916b45135691b0413eb72c0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:44:18Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 21:44:18.748549 6628 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0130 21:44:18.748601 6628 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0130 21:44:18.748636 6628 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0130 21:44:18.748776 6628 factory.go:1336] Added *v1.Node event handler 7\\\\nI0130 21:44:18.748890 6628 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0130 21:44:18.749458 6628 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0130 21:44:18.749579 6628 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0130 21:44:18.749620 6628 ovnkube.go:599] Stopped ovnkube\\\\nI0130 21:44:18.749653 6628 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 21:44:18.749736 6628 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:44:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-stqvf_openshift-ovn-kubernetes(c39d4fe5-06cd-4ea4-8336-bd481332c475)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"nam
e\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-stqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.020187 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qnlzn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e92fcbf8-b6b4-4531-a7cf-ed59225dd821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43fd4613a6d7e2360584cfe98ccaa73d8ed65139ba3c249aeb716b4303d170eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c73341f991fc768d5dcc57229c33204b0ade5469b5b9d7a970f38b97c3d6696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qnlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 
21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.020333 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.020501 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.020515 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.020536 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.020550 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:30Z","lastTransitionTime":"2026-01-30T21:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.035809 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a48f728-068e-4f6e-9794-12d375245df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb0a197870fc1ba15cbe9437452dfeac28e7670fe2383e71f5e29276082f7236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f22a7faa65595d272f6cb952ac0054da3d3b6a0017dc004f52c65d823f1da06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d
7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d705fd79e95205c59ebe341512286e7be25a663cdfe41ec50f0f57582c9f5ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d12f1427fc164e0f92a0ce90e116d3239ad1c56376666cea3ef92178d0988076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 21:43:39.192174 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 21:43:39.192295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:43:39.192865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2603741252/tls.crt::/tmp/serving-cert-2603741252/tls.key\\\\\\\"\\\\nI0130 21:43:39.581448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:43:39.592260 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:43:39.592286 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:43:39.592313 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:43:39.592318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:43:39.598780 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:43:39.598801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:43:39.598815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:43:39.598819 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:43:39.598822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:43:39.598829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:43:39.601912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9563e1c219434c81376559138e931e8457d653060f639891e4a15b3a61bcf2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.051738 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e768b90cb0dae0e7c594e45deebf9178321fecc14339fa1e49769e13c950ff9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.062582 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6fc0664-5e80-440d-a6e8-4189cdf5c500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a73c5cac1fdcb8d81bccdb2eea7b34e75752439d7871493f4afdd0325f02b2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fa528823e4fa19f4de380fa7c5ab78f40c763a6ea6624ba20e1a65a81980d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vzgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.075200 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13da86d5-c968-4e20-88dd-86b52c6402a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5f07c14a94021761f655ab66d0782a878dacdacbe9abc2c02db132e5f5721a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0c8909f295c8aea1aa2d6da1a709d1692b8fbdd6bb389d339d644655788328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://267daf399d26dd0253eed95c94b44bd4299990a14bb28159ddfd1f324dec6192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fd36ca3590a916ec08c8ae4d3
b410eb11261c6a5295a4274119f93e08079cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.089544 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.101577 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b14ccc4339f1670442e8d9c16039fb3607ed2097daf0e58291333a9d566dc02e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.114748 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a98ef3151784bd76a916ca4dc6d85c25c10bf0d9eca4444780ceb2c826e439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7741a57e2e55c2f88115cf30643e5c4cd87ab58081677cccdd027bf1307a7a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.122558 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.122610 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.122621 4869 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.122640 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.122652 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:30Z","lastTransitionTime":"2026-01-30T21:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.226205 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.226244 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.226253 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.226267 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.226276 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:30Z","lastTransitionTime":"2026-01-30T21:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.310345 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tz8jn_dac3c503-e284-4df8-ae5e-0084a884e456/kube-multus/0.log" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.310482 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tz8jn" event={"ID":"dac3c503-e284-4df8-ae5e-0084a884e456","Type":"ContainerStarted","Data":"09e707217ddad89be77f68915c4948c1bbc2e44066f16cce7e255a5a91c1e101"} Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.328205 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.328250 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.328150 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.328261 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.328448 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.328474 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:30Z","lastTransitionTime":"2026-01-30T21:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.342643 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v9n4p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ca98e-63b4-4337-af06-7525b62333b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c53920051c0c7658e9ffebac1382950ffea94ead520f2ca1b04cb6a0f9e15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk245\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v9n4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.357643 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f947b452-4e15-43ce-a4f4-3ca13d10f8d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://754edbe89075ed676689e09cc4aceaa7e45560a1db966e12c22a96be3bd3eba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68975f2095b7dcd1eaf59dd358c4ac1b731116418a8991290d4ee2d66611f2d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://568c346025fcd6f3b511f60206c6836f445c3e1be15135c064fed0507d20aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07626e9fc038e2c00d6751a0660aedfe5e66e6f735999b30cbffb7d6f8ff7011\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07626e9fc038e2c00d6751a0660aedfe5e66e6f735999b30cbffb7d6f8ff7011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.373904 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd656f9-7188-4998-8195-c2ec92442b7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeed7a751cd65072ffc31d4db0396578155d50518ad9dae28ced02985d1e4c62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f
8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\"
,\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jdbl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.384373 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c24fb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a9e9fe8-01db-40b0-bdd8-e3d626df037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c97e257984df620e4f118db44cb63e7748a6d62887c3904f1a2a0641991602d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p2pd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c24fb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.396287 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-45w6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b980f4db-64d3-48c9-9ff8-18f23c4888cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf95q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf95q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-45w6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:30 crc 
kubenswrapper[4869]: I0130 21:44:30.409138 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.446455 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.446514 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.446529 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.446551 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.446574 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:30Z","lastTransitionTime":"2026-01-30T21:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.446623 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e768b90cb0dae0e7c594e45deebf9178321fecc14339fa1e49769e13c950ff9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.466796 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6fc0664-5e80-440d-a6e8-4189cdf5c500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a73c5cac1fdcb8d81bccdb2eea7b34e75752439d7871493f4afdd0325f02b2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fa528823e4fa19f4de380fa7c5ab78f40c763a6ea6624ba20e1a65a81980d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vzgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.484387 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-tz8jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac3c503-e284-4df8-ae5e-0084a884e456\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09e707217ddad89be77f68915c4948c1bbc2e44066f16cce7e255a5a91c1e101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6d47a8f0a09eac03369c1f21e44d8ec6aaf85d4d7ad180579ecd30a6311061\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:44:28Z\\\",\\\"message\\\":\\\"2026-01-30T21:43:42+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d7736fda-c03e-486c-aa38-7f904c4e5cbe\\\\n2026-01-30T21:43:42+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d7736fda-c03e-486c-aa38-7f904c4e5cbe to /host/opt/cni/bin/\\\\n2026-01-30T21:43:43Z [verbose] multus-daemon started\\\\n2026-01-30T21:43:43Z [verbose] Readiness Indicator file check\\\\n2026-01-30T21:44:28Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgb6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tz8jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.505398 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c39d4fe5-06cd-4ea4-8336-bd481332c475\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81c545886305c472b5c9ce68265f9d0fc2177184916b45135691b0413eb72c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81c545886305c472b5c9ce68265f9d0fc2177184916b45135691b0413eb72c0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:44:18Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 21:44:18.748549 6628 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0130 21:44:18.748601 6628 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0130 21:44:18.748636 6628 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0130 21:44:18.748776 6628 factory.go:1336] Added *v1.Node event handler 7\\\\nI0130 21:44:18.748890 6628 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0130 21:44:18.749458 6628 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0130 21:44:18.749579 6628 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0130 21:44:18.749620 6628 ovnkube.go:599] Stopped ovnkube\\\\nI0130 21:44:18.749653 6628 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 21:44:18.749736 6628 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:44:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-stqvf_openshift-ovn-kubernetes(c39d4fe5-06cd-4ea4-8336-bd481332c475)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-stqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.516512 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qnlzn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e92fcbf8-b6b4-4531-a7cf-ed59225dd821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43fd4613a6d7e2360584cfe98ccaa73d8ed65139ba3c249aeb716b4303d170eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tsq2
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c73341f991fc768d5dcc57229c33204b0ade5469b5b9d7a970f38b97c3d6696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qnlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.531832 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a48f728-068e-4f6e-9794-12d375245df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb0a197870fc1ba15cbe9437452dfeac28e7670fe2383e71f5e29276082f7236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f22a7faa65595d272f6cb952ac0054da3d3b6a0017dc004f52c65d823f1da06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d705fd79e95205c59ebe341512286e7be25a663cdfe41ec50f0f57582c9f5ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d12f1427fc164e0f92a0ce90e116d3239ad1c56376666cea3ef92178d0988076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 21:43:39.192174 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 21:43:39.192295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:43:39.192865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2603741252/tls.crt::/tmp/serving-cert-2603741252/tls.key\\\\\\\"\\\\nI0130 21:43:39.581448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:43:39.592260 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:43:39.592286 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:43:39.592313 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:43:39.592318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:43:39.598780 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:43:39.598801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:43:39.598815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:43:39.598819 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:43:39.598822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:43:39.598829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:43:39.601912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9563e1c219434c81376559138e931e8457d653060f639891e4a15b3a61bcf2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.547142 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.549345 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.549406 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.549434 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.549451 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.549462 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:30Z","lastTransitionTime":"2026-01-30T21:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.564443 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b14ccc4339f1670442e8d9c16039fb3607ed2097daf0e58291333a9d566dc02e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.580216 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a98ef3151784bd76a916ca4dc6d85c25c10bf0d9eca4444780ceb2c826e439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7741a57e2e55c2f88115cf30643e5c4cd87ab58081677cccdd027bf1307a7a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.597466 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13da86d5-c968-4e20-88dd-86b52c6402a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5f07c14a94021761f655ab66d0782a878dacdacbe9abc2c02db132e5f5721a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0c8909f295c8aea1aa2d6da1a709d1692b8fbdd6bb389d339d644655788328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://267daf399d26dd0253eed95c94b44bd4299990a14bb28159ddfd1f324dec6192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fd36ca3590a916ec08c8ae4d3b410eb11261c6a5295a4274119f93e08079cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.652294 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.652355 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.652373 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.652400 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.652420 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:30Z","lastTransitionTime":"2026-01-30T21:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.755952 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.756033 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.756054 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.756086 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.756105 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:30Z","lastTransitionTime":"2026-01-30T21:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.858942 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.859002 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.859021 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.859044 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.859061 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:30Z","lastTransitionTime":"2026-01-30T21:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.877744 4869 scope.go:117] "RemoveContainer" containerID="81c545886305c472b5c9ce68265f9d0fc2177184916b45135691b0413eb72c0c" Jan 30 21:44:30 crc kubenswrapper[4869]: E0130 21:44:30.878213 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-stqvf_openshift-ovn-kubernetes(c39d4fe5-06cd-4ea4-8336-bd481332c475)\"" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.943069 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 05:24:47.402520435 +0000 UTC Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.962586 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.962642 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.962658 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.962684 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:30 crc kubenswrapper[4869]: I0130 21:44:30.962701 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:30Z","lastTransitionTime":"2026-01-30T21:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.066246 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.066291 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.066316 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.066338 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.066351 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:31Z","lastTransitionTime":"2026-01-30T21:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.169733 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.169782 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.169795 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.169811 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.169823 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:31Z","lastTransitionTime":"2026-01-30T21:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.272677 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.272740 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.272758 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.272816 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.272834 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:31Z","lastTransitionTime":"2026-01-30T21:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.375664 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.375699 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.375707 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.375720 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.375729 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:31Z","lastTransitionTime":"2026-01-30T21:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.478375 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.478445 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.478467 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.478491 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.478508 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:31Z","lastTransitionTime":"2026-01-30T21:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.580955 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.581016 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.581034 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.581059 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.581073 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:31Z","lastTransitionTime":"2026-01-30T21:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.684149 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.684269 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.684379 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.684485 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.684561 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:31Z","lastTransitionTime":"2026-01-30T21:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.787963 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.788001 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.788010 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.788051 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.788062 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:31Z","lastTransitionTime":"2026-01-30T21:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.876962 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.877130 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.876984 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:44:31 crc kubenswrapper[4869]: E0130 21:44:31.877207 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.877227 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:44:31 crc kubenswrapper[4869]: E0130 21:44:31.877289 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd" Jan 30 21:44:31 crc kubenswrapper[4869]: E0130 21:44:31.877405 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:44:31 crc kubenswrapper[4869]: E0130 21:44:31.877565 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.890449 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.890495 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.890505 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.890520 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.890531 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:31Z","lastTransitionTime":"2026-01-30T21:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.944160 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 06:41:53.774025731 +0000 UTC Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.992621 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.992709 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.992727 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.992755 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:31 crc kubenswrapper[4869]: I0130 21:44:31.992775 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:31Z","lastTransitionTime":"2026-01-30T21:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:32 crc kubenswrapper[4869]: I0130 21:44:32.095310 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:32 crc kubenswrapper[4869]: I0130 21:44:32.095352 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:32 crc kubenswrapper[4869]: I0130 21:44:32.095365 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:32 crc kubenswrapper[4869]: I0130 21:44:32.095380 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:32 crc kubenswrapper[4869]: I0130 21:44:32.095390 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:32Z","lastTransitionTime":"2026-01-30T21:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:32 crc kubenswrapper[4869]: I0130 21:44:32.197836 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:32 crc kubenswrapper[4869]: I0130 21:44:32.197879 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:32 crc kubenswrapper[4869]: I0130 21:44:32.197906 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:32 crc kubenswrapper[4869]: I0130 21:44:32.197923 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:32 crc kubenswrapper[4869]: I0130 21:44:32.197933 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:32Z","lastTransitionTime":"2026-01-30T21:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:32 crc kubenswrapper[4869]: I0130 21:44:32.300682 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:32 crc kubenswrapper[4869]: I0130 21:44:32.300722 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:32 crc kubenswrapper[4869]: I0130 21:44:32.300730 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:32 crc kubenswrapper[4869]: I0130 21:44:32.300744 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:32 crc kubenswrapper[4869]: I0130 21:44:32.300754 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:32Z","lastTransitionTime":"2026-01-30T21:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:32 crc kubenswrapper[4869]: I0130 21:44:32.403683 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:32 crc kubenswrapper[4869]: I0130 21:44:32.403724 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:32 crc kubenswrapper[4869]: I0130 21:44:32.403733 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:32 crc kubenswrapper[4869]: I0130 21:44:32.403748 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:32 crc kubenswrapper[4869]: I0130 21:44:32.403761 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:32Z","lastTransitionTime":"2026-01-30T21:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:32 crc kubenswrapper[4869]: I0130 21:44:32.506058 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:32 crc kubenswrapper[4869]: I0130 21:44:32.506110 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:32 crc kubenswrapper[4869]: I0130 21:44:32.506123 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:32 crc kubenswrapper[4869]: I0130 21:44:32.506141 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:32 crc kubenswrapper[4869]: I0130 21:44:32.506151 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:32Z","lastTransitionTime":"2026-01-30T21:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:32 crc kubenswrapper[4869]: I0130 21:44:32.608325 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:32 crc kubenswrapper[4869]: I0130 21:44:32.608376 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:32 crc kubenswrapper[4869]: I0130 21:44:32.608384 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:32 crc kubenswrapper[4869]: I0130 21:44:32.608398 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:32 crc kubenswrapper[4869]: I0130 21:44:32.608407 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:32Z","lastTransitionTime":"2026-01-30T21:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:32 crc kubenswrapper[4869]: I0130 21:44:32.711622 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:32 crc kubenswrapper[4869]: I0130 21:44:32.711703 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:32 crc kubenswrapper[4869]: I0130 21:44:32.711713 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:32 crc kubenswrapper[4869]: I0130 21:44:32.711731 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:32 crc kubenswrapper[4869]: I0130 21:44:32.711743 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:32Z","lastTransitionTime":"2026-01-30T21:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:32 crc kubenswrapper[4869]: I0130 21:44:32.814658 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:32 crc kubenswrapper[4869]: I0130 21:44:32.814710 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:32 crc kubenswrapper[4869]: I0130 21:44:32.814724 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:32 crc kubenswrapper[4869]: I0130 21:44:32.814741 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:32 crc kubenswrapper[4869]: I0130 21:44:32.814755 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:32Z","lastTransitionTime":"2026-01-30T21:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:32 crc kubenswrapper[4869]: I0130 21:44:32.918167 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:32 crc kubenswrapper[4869]: I0130 21:44:32.918223 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:32 crc kubenswrapper[4869]: I0130 21:44:32.918238 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:32 crc kubenswrapper[4869]: I0130 21:44:32.918263 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:32 crc kubenswrapper[4869]: I0130 21:44:32.918280 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:32Z","lastTransitionTime":"2026-01-30T21:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:32 crc kubenswrapper[4869]: I0130 21:44:32.944628 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 12:41:37.118387716 +0000 UTC Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.021214 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.021267 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.021278 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.021296 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.021308 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:33Z","lastTransitionTime":"2026-01-30T21:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.123418 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.123451 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.123460 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.123474 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.123483 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:33Z","lastTransitionTime":"2026-01-30T21:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.225696 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.225741 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.225754 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.225770 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.225780 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:33Z","lastTransitionTime":"2026-01-30T21:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.327627 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.327660 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.327670 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.327683 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.327691 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:33Z","lastTransitionTime":"2026-01-30T21:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.430427 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.430475 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.430487 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.430503 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.430513 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:33Z","lastTransitionTime":"2026-01-30T21:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.532781 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.532824 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.532837 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.532855 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.532872 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:33Z","lastTransitionTime":"2026-01-30T21:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.636196 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.636280 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.636301 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.636332 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.636350 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:33Z","lastTransitionTime":"2026-01-30T21:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.739536 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.739590 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.739607 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.739630 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.739648 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:33Z","lastTransitionTime":"2026-01-30T21:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.842281 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.842325 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.842336 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.842354 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.842368 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:33Z","lastTransitionTime":"2026-01-30T21:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.877149 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.877148 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.877243 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:44:33 crc kubenswrapper[4869]: E0130 21:44:33.877355 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd" Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.877604 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:44:33 crc kubenswrapper[4869]: E0130 21:44:33.877748 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:44:33 crc kubenswrapper[4869]: E0130 21:44:33.877830 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:44:33 crc kubenswrapper[4869]: E0130 21:44:33.878005 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.894576 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.944718 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 11:17:45.287626527 +0000 UTC Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.945366 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.945398 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.945412 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.945433 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:33 crc kubenswrapper[4869]: I0130 21:44:33.945446 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:33Z","lastTransitionTime":"2026-01-30T21:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.048409 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.048447 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.048459 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.048476 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.048488 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:34Z","lastTransitionTime":"2026-01-30T21:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.151053 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.151108 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.151126 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.151148 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.151165 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:34Z","lastTransitionTime":"2026-01-30T21:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.253628 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.253663 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.253674 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.253690 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.253699 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:34Z","lastTransitionTime":"2026-01-30T21:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.355765 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.355816 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.355829 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.355850 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.355863 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:34Z","lastTransitionTime":"2026-01-30T21:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.457676 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.457708 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.457715 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.457731 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.457740 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:34Z","lastTransitionTime":"2026-01-30T21:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.560050 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.560090 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.560103 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.560124 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.560139 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:34Z","lastTransitionTime":"2026-01-30T21:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.662685 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.662722 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.662732 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.662745 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.662756 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:34Z","lastTransitionTime":"2026-01-30T21:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.765962 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.766063 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.766097 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.766145 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.766174 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:34Z","lastTransitionTime":"2026-01-30T21:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.868555 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.868606 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.868616 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.868633 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.868643 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:34Z","lastTransitionTime":"2026-01-30T21:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.945533 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 02:58:13.49425028 +0000 UTC Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.970952 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.970989 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.971027 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.971043 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:34 crc kubenswrapper[4869]: I0130 21:44:34.971052 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:34Z","lastTransitionTime":"2026-01-30T21:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.073168 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.073212 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.073223 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.073238 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.073248 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:35Z","lastTransitionTime":"2026-01-30T21:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.175051 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.175084 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.175092 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.175125 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.175137 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:35Z","lastTransitionTime":"2026-01-30T21:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.277708 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.277740 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.277749 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.277762 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.277772 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:35Z","lastTransitionTime":"2026-01-30T21:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.382248 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.382342 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.382366 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.382395 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.382414 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:35Z","lastTransitionTime":"2026-01-30T21:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.485121 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.485201 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.485214 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.485258 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.485274 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:35Z","lastTransitionTime":"2026-01-30T21:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.587575 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.587618 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.587629 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.587644 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.587654 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:35Z","lastTransitionTime":"2026-01-30T21:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.689385 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.689424 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.689450 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.689468 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.689479 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:35Z","lastTransitionTime":"2026-01-30T21:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.792613 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.792701 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.792726 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.792798 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.792826 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:35Z","lastTransitionTime":"2026-01-30T21:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.876048 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.876140 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.876228 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.876253 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:44:35 crc kubenswrapper[4869]: E0130 21:44:35.877000 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:44:35 crc kubenswrapper[4869]: E0130 21:44:35.877100 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:44:35 crc kubenswrapper[4869]: E0130 21:44:35.877104 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd" Jan 30 21:44:35 crc kubenswrapper[4869]: E0130 21:44:35.877180 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.895746 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.895807 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.895820 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.895840 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.895853 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:35Z","lastTransitionTime":"2026-01-30T21:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.945709 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 22:45:12.844552118 +0000 UTC Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.998546 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.998627 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.998658 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.998692 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:35 crc kubenswrapper[4869]: I0130 21:44:35.998714 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:35Z","lastTransitionTime":"2026-01-30T21:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:36 crc kubenswrapper[4869]: I0130 21:44:36.102638 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:36 crc kubenswrapper[4869]: I0130 21:44:36.102826 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:36 crc kubenswrapper[4869]: I0130 21:44:36.102846 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:36 crc kubenswrapper[4869]: I0130 21:44:36.102866 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:36 crc kubenswrapper[4869]: I0130 21:44:36.102887 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:36Z","lastTransitionTime":"2026-01-30T21:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:36 crc kubenswrapper[4869]: I0130 21:44:36.206794 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:36 crc kubenswrapper[4869]: I0130 21:44:36.206862 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:36 crc kubenswrapper[4869]: I0130 21:44:36.206875 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:36 crc kubenswrapper[4869]: I0130 21:44:36.206911 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:36 crc kubenswrapper[4869]: I0130 21:44:36.206924 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:36Z","lastTransitionTime":"2026-01-30T21:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:36 crc kubenswrapper[4869]: I0130 21:44:36.310025 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:36 crc kubenswrapper[4869]: I0130 21:44:36.310063 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:36 crc kubenswrapper[4869]: I0130 21:44:36.310075 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:36 crc kubenswrapper[4869]: I0130 21:44:36.310091 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:36 crc kubenswrapper[4869]: I0130 21:44:36.310107 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:36Z","lastTransitionTime":"2026-01-30T21:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:36 crc kubenswrapper[4869]: I0130 21:44:36.412771 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:36 crc kubenswrapper[4869]: I0130 21:44:36.412826 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:36 crc kubenswrapper[4869]: I0130 21:44:36.412845 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:36 crc kubenswrapper[4869]: I0130 21:44:36.412869 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:36 crc kubenswrapper[4869]: I0130 21:44:36.412883 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:36Z","lastTransitionTime":"2026-01-30T21:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:36 crc kubenswrapper[4869]: I0130 21:44:36.515640 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:36 crc kubenswrapper[4869]: I0130 21:44:36.516164 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:36 crc kubenswrapper[4869]: I0130 21:44:36.516257 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:36 crc kubenswrapper[4869]: I0130 21:44:36.516331 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:36 crc kubenswrapper[4869]: I0130 21:44:36.516505 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:36Z","lastTransitionTime":"2026-01-30T21:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:36 crc kubenswrapper[4869]: I0130 21:44:36.619659 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:36 crc kubenswrapper[4869]: I0130 21:44:36.619748 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:36 crc kubenswrapper[4869]: I0130 21:44:36.619768 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:36 crc kubenswrapper[4869]: I0130 21:44:36.619796 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:36 crc kubenswrapper[4869]: I0130 21:44:36.619815 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:36Z","lastTransitionTime":"2026-01-30T21:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:36 crc kubenswrapper[4869]: I0130 21:44:36.723762 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:36 crc kubenswrapper[4869]: I0130 21:44:36.724464 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:36 crc kubenswrapper[4869]: I0130 21:44:36.724721 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:36 crc kubenswrapper[4869]: I0130 21:44:36.725184 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:36 crc kubenswrapper[4869]: I0130 21:44:36.725430 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:36Z","lastTransitionTime":"2026-01-30T21:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:36 crc kubenswrapper[4869]: I0130 21:44:36.830168 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:36 crc kubenswrapper[4869]: I0130 21:44:36.830223 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:36 crc kubenswrapper[4869]: I0130 21:44:36.830235 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:36 crc kubenswrapper[4869]: I0130 21:44:36.830255 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:36 crc kubenswrapper[4869]: I0130 21:44:36.830268 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:36Z","lastTransitionTime":"2026-01-30T21:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:36 crc kubenswrapper[4869]: I0130 21:44:36.933279 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:36 crc kubenswrapper[4869]: I0130 21:44:36.933311 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:36 crc kubenswrapper[4869]: I0130 21:44:36.933322 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:36 crc kubenswrapper[4869]: I0130 21:44:36.933335 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:36 crc kubenswrapper[4869]: I0130 21:44:36.933346 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:36Z","lastTransitionTime":"2026-01-30T21:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:36 crc kubenswrapper[4869]: I0130 21:44:36.946555 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 07:10:40.866863823 +0000 UTC Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.036783 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.036827 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.036840 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.036857 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.036871 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:37Z","lastTransitionTime":"2026-01-30T21:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.139542 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.139592 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.139612 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.139632 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.139646 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:37Z","lastTransitionTime":"2026-01-30T21:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.168444 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.168512 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.168531 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.168557 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.168575 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:37Z","lastTransitionTime":"2026-01-30T21:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:37 crc kubenswrapper[4869]: E0130 21:44:37.188272 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eed4c80a-2486-4f50-8ae9-4ddbd620d70e\\\",\\\"systemUUID\\\":\\\"073254b5-c7c0-49f1-bed8-4438b0f03db1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:37Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.193314 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.193357 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.193371 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.193387 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.193399 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:37Z","lastTransitionTime":"2026-01-30T21:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:37 crc kubenswrapper[4869]: E0130 21:44:37.210946 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eed4c80a-2486-4f50-8ae9-4ddbd620d70e\\\",\\\"systemUUID\\\":\\\"073254b5-c7c0-49f1-bed8-4438b0f03db1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:37Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.215098 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.215139 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.215152 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.215169 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.215182 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:37Z","lastTransitionTime":"2026-01-30T21:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:37 crc kubenswrapper[4869]: E0130 21:44:37.232190 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eed4c80a-2486-4f50-8ae9-4ddbd620d70e\\\",\\\"systemUUID\\\":\\\"073254b5-c7c0-49f1-bed8-4438b0f03db1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:37Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.236349 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.236419 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.236437 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.236462 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.236486 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:37Z","lastTransitionTime":"2026-01-30T21:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:37 crc kubenswrapper[4869]: E0130 21:44:37.250331 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eed4c80a-2486-4f50-8ae9-4ddbd620d70e\\\",\\\"systemUUID\\\":\\\"073254b5-c7c0-49f1-bed8-4438b0f03db1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:37Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.254663 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.254717 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.254735 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.254758 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.254775 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:37Z","lastTransitionTime":"2026-01-30T21:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:37 crc kubenswrapper[4869]: E0130 21:44:37.274363 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eed4c80a-2486-4f50-8ae9-4ddbd620d70e\\\",\\\"systemUUID\\\":\\\"073254b5-c7c0-49f1-bed8-4438b0f03db1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:37Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:37 crc kubenswrapper[4869]: E0130 21:44:37.274509 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.276212 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.276348 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.276371 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.276478 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.276807 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:37Z","lastTransitionTime":"2026-01-30T21:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.379354 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.379416 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.379434 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.379460 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.379481 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:37Z","lastTransitionTime":"2026-01-30T21:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.483016 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.483090 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.483112 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.483137 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.483154 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:37Z","lastTransitionTime":"2026-01-30T21:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.585999 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.586049 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.586062 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.586080 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.586092 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:37Z","lastTransitionTime":"2026-01-30T21:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.689441 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.689528 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.689547 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.689571 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.689590 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:37Z","lastTransitionTime":"2026-01-30T21:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.791384 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.791441 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.791464 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.791493 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.791512 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:37Z","lastTransitionTime":"2026-01-30T21:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.876842 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.877010 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:44:37 crc kubenswrapper[4869]: E0130 21:44:37.877050 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:44:37 crc kubenswrapper[4869]: E0130 21:44:37.877270 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.878095 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.878323 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:44:37 crc kubenswrapper[4869]: E0130 21:44:37.878565 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:44:37 crc kubenswrapper[4869]: E0130 21:44:37.878776 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.893745 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.893819 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.893839 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.893862 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.893880 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:37Z","lastTransitionTime":"2026-01-30T21:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.947275 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 21:47:48.757863332 +0000 UTC Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.997337 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.997397 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.997423 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.997452 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:37 crc kubenswrapper[4869]: I0130 21:44:37.997475 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:37Z","lastTransitionTime":"2026-01-30T21:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:38 crc kubenswrapper[4869]: I0130 21:44:38.101250 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:38 crc kubenswrapper[4869]: I0130 21:44:38.101300 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:38 crc kubenswrapper[4869]: I0130 21:44:38.101313 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:38 crc kubenswrapper[4869]: I0130 21:44:38.101331 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:38 crc kubenswrapper[4869]: I0130 21:44:38.101343 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:38Z","lastTransitionTime":"2026-01-30T21:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:38 crc kubenswrapper[4869]: I0130 21:44:38.204225 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:38 crc kubenswrapper[4869]: I0130 21:44:38.204347 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:38 crc kubenswrapper[4869]: I0130 21:44:38.204377 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:38 crc kubenswrapper[4869]: I0130 21:44:38.204408 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:38 crc kubenswrapper[4869]: I0130 21:44:38.204431 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:38Z","lastTransitionTime":"2026-01-30T21:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:38 crc kubenswrapper[4869]: I0130 21:44:38.308195 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:38 crc kubenswrapper[4869]: I0130 21:44:38.308263 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:38 crc kubenswrapper[4869]: I0130 21:44:38.308282 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:38 crc kubenswrapper[4869]: I0130 21:44:38.308314 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:38 crc kubenswrapper[4869]: I0130 21:44:38.308334 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:38Z","lastTransitionTime":"2026-01-30T21:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:38 crc kubenswrapper[4869]: I0130 21:44:38.413207 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:38 crc kubenswrapper[4869]: I0130 21:44:38.413262 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:38 crc kubenswrapper[4869]: I0130 21:44:38.413277 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:38 crc kubenswrapper[4869]: I0130 21:44:38.413308 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:38 crc kubenswrapper[4869]: I0130 21:44:38.413322 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:38Z","lastTransitionTime":"2026-01-30T21:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:38 crc kubenswrapper[4869]: I0130 21:44:38.516415 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:38 crc kubenswrapper[4869]: I0130 21:44:38.516458 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:38 crc kubenswrapper[4869]: I0130 21:44:38.516470 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:38 crc kubenswrapper[4869]: I0130 21:44:38.516487 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:38 crc kubenswrapper[4869]: I0130 21:44:38.516501 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:38Z","lastTransitionTime":"2026-01-30T21:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:38 crc kubenswrapper[4869]: I0130 21:44:38.620500 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:38 crc kubenswrapper[4869]: I0130 21:44:38.620558 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:38 crc kubenswrapper[4869]: I0130 21:44:38.620572 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:38 crc kubenswrapper[4869]: I0130 21:44:38.620598 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:38 crc kubenswrapper[4869]: I0130 21:44:38.620615 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:38Z","lastTransitionTime":"2026-01-30T21:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:38 crc kubenswrapper[4869]: I0130 21:44:38.723344 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:38 crc kubenswrapper[4869]: I0130 21:44:38.723413 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:38 crc kubenswrapper[4869]: I0130 21:44:38.723433 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:38 crc kubenswrapper[4869]: I0130 21:44:38.723460 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:38 crc kubenswrapper[4869]: I0130 21:44:38.723477 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:38Z","lastTransitionTime":"2026-01-30T21:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:38 crc kubenswrapper[4869]: I0130 21:44:38.826930 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:38 crc kubenswrapper[4869]: I0130 21:44:38.826977 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:38 crc kubenswrapper[4869]: I0130 21:44:38.826989 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:38 crc kubenswrapper[4869]: I0130 21:44:38.827006 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:38 crc kubenswrapper[4869]: I0130 21:44:38.827019 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:38Z","lastTransitionTime":"2026-01-30T21:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:38 crc kubenswrapper[4869]: I0130 21:44:38.929260 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:38 crc kubenswrapper[4869]: I0130 21:44:38.929326 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:38 crc kubenswrapper[4869]: I0130 21:44:38.929344 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:38 crc kubenswrapper[4869]: I0130 21:44:38.929371 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:38 crc kubenswrapper[4869]: I0130 21:44:38.929402 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:38Z","lastTransitionTime":"2026-01-30T21:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:38 crc kubenswrapper[4869]: I0130 21:44:38.947644 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 18:13:08.239406005 +0000 UTC Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.032057 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.032103 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.032115 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.032136 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.032146 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:39Z","lastTransitionTime":"2026-01-30T21:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.134815 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.134938 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.134967 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.135001 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.135030 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:39Z","lastTransitionTime":"2026-01-30T21:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.239008 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.239063 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.239076 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.239097 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.239112 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:39Z","lastTransitionTime":"2026-01-30T21:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.341765 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.341823 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.341837 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.341860 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.341875 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:39Z","lastTransitionTime":"2026-01-30T21:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.444953 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.445071 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.445092 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.445123 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.445141 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:39Z","lastTransitionTime":"2026-01-30T21:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.549229 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.549297 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.549315 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.549344 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.549363 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:39Z","lastTransitionTime":"2026-01-30T21:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.652271 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.652349 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.652364 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.652387 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.652404 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:39Z","lastTransitionTime":"2026-01-30T21:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.755953 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.756025 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.756045 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.756073 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.756092 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:39Z","lastTransitionTime":"2026-01-30T21:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.860191 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.860247 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.860259 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.860280 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.860296 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:39Z","lastTransitionTime":"2026-01-30T21:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.876670 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:44:39 crc kubenswrapper[4869]: E0130 21:44:39.876801 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.876829 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.876876 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.876975 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:44:39 crc kubenswrapper[4869]: E0130 21:44:39.877060 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:44:39 crc kubenswrapper[4869]: E0130 21:44:39.877224 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:44:39 crc kubenswrapper[4869]: E0130 21:44:39.877643 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd" Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.901782 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd656f9-7188-4998-8195-c2ec92442b7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeed7a751cd65072ffc31d4db0396578155d50518ad9dae28ced02985d1e4c62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\
\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28
c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jdbl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:39Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.917256 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c24fb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a9e9fe8-01db-40b0-bdd8-e3d626df037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c97e257984df620e4f118db44cb63e7748a6d62887c3904f1a2a0641991602d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p2pd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c24fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:39Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.934205 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-45w6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b980f4db-64d3-48c9-9ff8-18f23c4888cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf95q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf95q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-45w6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:39Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.948128 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 20:07:37.746622477 +0000 UTC Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.956362 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:39Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.963539 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.963655 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.963682 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.963742 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.963788 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:39Z","lastTransitionTime":"2026-01-30T21:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:39 crc kubenswrapper[4869]: I0130 21:44:39.982635 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0cf51fe-c6b0-432d-934e-df3b70d5895c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7301698629901946ea3fa72f6c03cac1a88d253d9e14f4a8c225c8ff390fc0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e817aca3610a40823460973541a6bb549f4d88d55919c93640ae8c6dc0874946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e817aca3610a40823460973541a6bb549f4d88d55919c93640ae8c6dc0874946\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:39Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:39.999882 4869 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e768b90cb0dae0e7c594e45deebf9178321fecc14339fa1e49769e13c950ff9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:39Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.020150 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6fc0664-5e80-440d-a6e8-4189cdf5c500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a73c5cac1fdcb8d81bccdb2eea7b34e75752439d7871493f4afdd0325f02b2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fa528823e4fa19f4de380fa7c5ab78f40c763a6ea6624ba20e1a65a81980d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vzgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:40Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.039162 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-tz8jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac3c503-e284-4df8-ae5e-0084a884e456\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09e707217ddad89be77f68915c4948c1bbc2e44066f16cce7e255a5a91c1e101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6d47a8f0a09eac03369c1f21e44d8ec6aaf85d4d7ad180579ecd30a6311061\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:44:28Z\\\",\\\"message\\\":\\\"2026-01-30T21:43:42+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d7736fda-c03e-486c-aa38-7f904c4e5cbe\\\\n2026-01-30T21:43:42+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d7736fda-c03e-486c-aa38-7f904c4e5cbe to /host/opt/cni/bin/\\\\n2026-01-30T21:43:43Z [verbose] multus-daemon started\\\\n2026-01-30T21:43:43Z [verbose] Readiness Indicator file check\\\\n2026-01-30T21:44:28Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgb6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tz8jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:40Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.068105 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.068170 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.068187 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.068216 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.068235 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:40Z","lastTransitionTime":"2026-01-30T21:44:40Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.070031 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c39d4fe5-06cd-4ea4-8336-bd481332c475\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/s
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81c545886305c472b5c9ce68265f9d0fc2177184916b45135691b0413eb72c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81c545886305c472b5c9ce68265f9d0fc2177184916b45135691b0413eb72c0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:44:18Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 21:44:18.748549 6628 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0130 21:44:18.748601 6628 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0130 21:44:18.748636 6628 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0130 21:44:18.748776 6628 factory.go:1336] Added *v1.Node event handler 7\\\\nI0130 21:44:18.748890 6628 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0130 21:44:18.749458 6628 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0130 21:44:18.749579 6628 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0130 21:44:18.749620 6628 ovnkube.go:599] Stopped ovnkube\\\\nI0130 21:44:18.749653 6628 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 21:44:18.749736 6628 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:44:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-stqvf_openshift-ovn-kubernetes(c39d4fe5-06cd-4ea4-8336-bd481332c475)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-stqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:40Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.084675 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qnlzn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e92fcbf8-b6b4-4531-a7cf-ed59225dd821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43fd4613a6d7e2360584cfe98ccaa73d8ed65139ba3c249aeb716b4303d170eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c73341f991fc768d5dcc57229c33204b0ade5469b5b9d7a970f38b97c3d6696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qnlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:40Z is after 2025-08-24T17:21:41Z" Jan 30 
21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.100785 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a48f728-068e-4f6e-9794-12d375245df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb0a197870fc1ba15cbe9437452dfeac28e7670fe2383e71f5e29276082f7236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f22a7faa65595d272f6cb952ac0054da3d3b6a0017dc004f52c65d823f1da06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d705fd79e95205c59ebe341512286e7be25a663cdfe41ec50f0f57582c9f5ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d12f1427fc164e0f92a0ce90e116d3239ad1c56376666cea3ef92178d0988076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 21:43:39.192174 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 21:43:39.192295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:43:39.192865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2603741252/tls.crt::/tmp/serving-cert-2603741252/tls.key\\\\\\\"\\\\nI0130 21:43:39.581448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:43:39.592260 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:43:39.592286 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:43:39.592313 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:43:39.592318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:43:39.598780 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:43:39.598801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:43:39.598815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:43:39.598819 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:43:39.598822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:43:39.598829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:43:39.601912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9563e1c219434c81376559138e931e8457d653060f639891e4a15b3a61bcf2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:40Z is after 2025-08-24T17:21:41Z"
Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.119261 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:40Z is after 2025-08-24T17:21:41Z"
Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.136118 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b14ccc4339f1670442e8d9c16039fb3607ed2097daf0e58291333a9d566dc02e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:40Z is after 2025-08-24T17:21:41Z"
Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.155982 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a98ef3151784bd76a916ca4dc6d85c25c10bf0d9eca4444780ceb2c826e439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7741a57e2e55c2f88115cf30643e5c4cd87ab58081677cccdd027bf1307a7a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:40Z is after 2025-08-24T17:21:41Z"
Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.171273 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.171335 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.171351 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.171376 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.171393 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:40Z","lastTransitionTime":"2026-01-30T21:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.175230 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13da86d5-c968-4e20-88dd-86b52c6402a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5f07c14a94021761f655ab66d0782a878dacdacbe9abc2c02db132e5f5721a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0c8909f295c8aea1aa2d6da1a709d1692b8fbdd6bb389d339d644655788328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://267daf399d26dd0253eed95c94b44bd4299990a14bb28159ddfd1f324dec6192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fd36ca3590a916ec08c8ae4d3b410eb11261c6a5295a4274119f93e08079cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:40Z is after 2025-08-24T17:21:41Z"
Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.195513 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:40Z is after 2025-08-24T17:21:41Z"
Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.211470 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v9n4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ca98e-63b4-4337-af06-7525b62333b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c53920051c0c7658e9ffebac1382950ffea94ead520f2ca1b04cb6a0f9e15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk245\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v9n4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:40Z is after 2025-08-24T17:21:41Z"
Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.228299 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f947b452-4e15-43ce-a4f4-3ca13d10f8d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://754edbe89075ed676689e09cc4aceaa7e45560a1db966e12c22a96be3bd3eba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68975f2095b7dcd1eaf59dd358c4ac1b731116418a8991290d4ee2d66611f2d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://568c346025fcd6f3b511f60206c6836f445c3e1be15135c064fed0507d20aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07626e9fc038e2c00d6751a0660aedfe5e66e6f735999b30cbffb7d6f8ff7011\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07626e9fc038e2c00d6751a0660aedfe5e66e6f735999b30cbffb7d6f8ff7011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:40Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.273661 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.273728 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.273746 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.273772 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.273793 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:40Z","lastTransitionTime":"2026-01-30T21:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.376451 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.376499 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.376512 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.376528 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.376539 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:40Z","lastTransitionTime":"2026-01-30T21:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.480248 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.480316 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.480337 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.480364 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.480382 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:40Z","lastTransitionTime":"2026-01-30T21:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.583879 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.583954 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.583972 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.583994 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.584012 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:40Z","lastTransitionTime":"2026-01-30T21:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.688088 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.688488 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.688508 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.688562 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.688600 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:40Z","lastTransitionTime":"2026-01-30T21:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.791584 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.791650 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.791663 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.791687 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.791700 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:40Z","lastTransitionTime":"2026-01-30T21:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.895262 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.895338 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.895359 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.895393 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.895412 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:40Z","lastTransitionTime":"2026-01-30T21:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.948755 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 02:50:03.102137635 +0000 UTC
Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.999205 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.999281 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.999306 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.999339 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:40 crc kubenswrapper[4869]: I0130 21:44:40.999365 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:40Z","lastTransitionTime":"2026-01-30T21:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:41 crc kubenswrapper[4869]: I0130 21:44:41.102092 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:41 crc kubenswrapper[4869]: I0130 21:44:41.102157 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:41 crc kubenswrapper[4869]: I0130 21:44:41.102175 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:41 crc kubenswrapper[4869]: I0130 21:44:41.102201 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:41 crc kubenswrapper[4869]: I0130 21:44:41.102222 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:41Z","lastTransitionTime":"2026-01-30T21:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:41 crc kubenswrapper[4869]: I0130 21:44:41.236957 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:41 crc kubenswrapper[4869]: I0130 21:44:41.237038 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:41 crc kubenswrapper[4869]: I0130 21:44:41.237064 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:41 crc kubenswrapper[4869]: I0130 21:44:41.237126 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:41 crc kubenswrapper[4869]: I0130 21:44:41.237151 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:41Z","lastTransitionTime":"2026-01-30T21:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:41 crc kubenswrapper[4869]: I0130 21:44:41.340101 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:41 crc kubenswrapper[4869]: I0130 21:44:41.340152 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:41 crc kubenswrapper[4869]: I0130 21:44:41.340163 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:41 crc kubenswrapper[4869]: I0130 21:44:41.340181 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:41 crc kubenswrapper[4869]: I0130 21:44:41.340194 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:41Z","lastTransitionTime":"2026-01-30T21:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:41 crc kubenswrapper[4869]: I0130 21:44:41.442830 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:41 crc kubenswrapper[4869]: I0130 21:44:41.442949 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:41 crc kubenswrapper[4869]: I0130 21:44:41.442968 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:41 crc kubenswrapper[4869]: I0130 21:44:41.442991 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:41 crc kubenswrapper[4869]: I0130 21:44:41.443036 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:41Z","lastTransitionTime":"2026-01-30T21:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:41 crc kubenswrapper[4869]: I0130 21:44:41.545461 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:41 crc kubenswrapper[4869]: I0130 21:44:41.545528 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:41 crc kubenswrapper[4869]: I0130 21:44:41.545549 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:41 crc kubenswrapper[4869]: I0130 21:44:41.545574 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:41 crc kubenswrapper[4869]: I0130 21:44:41.545676 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:41Z","lastTransitionTime":"2026-01-30T21:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:41 crc kubenswrapper[4869]: I0130 21:44:41.649796 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:41 crc kubenswrapper[4869]: I0130 21:44:41.649863 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:41 crc kubenswrapper[4869]: I0130 21:44:41.649877 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:41 crc kubenswrapper[4869]: I0130 21:44:41.649931 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:41 crc kubenswrapper[4869]: I0130 21:44:41.649946 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:41Z","lastTransitionTime":"2026-01-30T21:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:41 crc kubenswrapper[4869]: I0130 21:44:41.754024 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:41 crc kubenswrapper[4869]: I0130 21:44:41.754085 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:41 crc kubenswrapper[4869]: I0130 21:44:41.754097 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:41 crc kubenswrapper[4869]: I0130 21:44:41.754119 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:41 crc kubenswrapper[4869]: I0130 21:44:41.754133 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:41Z","lastTransitionTime":"2026-01-30T21:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:41 crc kubenswrapper[4869]: I0130 21:44:41.857562 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:44:41 crc kubenswrapper[4869]: I0130 21:44:41.857616 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:44:41 crc kubenswrapper[4869]: I0130 21:44:41.857634 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:44:41 crc kubenswrapper[4869]: I0130 21:44:41.857658 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:44:41 crc kubenswrapper[4869]: I0130 21:44:41.857676 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:41Z","lastTransitionTime":"2026-01-30T21:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:44:41 crc kubenswrapper[4869]: I0130 21:44:41.875979 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:44:41 crc kubenswrapper[4869]: I0130 21:44:41.876055 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:44:41 crc kubenswrapper[4869]: I0130 21:44:41.876147 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p"
Jan 30 21:44:41 crc kubenswrapper[4869]: I0130 21:44:41.876343 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:44:41 crc kubenswrapper[4869]: E0130 21:44:41.876326 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:44:41 crc kubenswrapper[4869]: E0130 21:44:41.876529 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:44:41 crc kubenswrapper[4869]: E0130 21:44:41.876875 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:44:41 crc kubenswrapper[4869]: E0130 21:44:41.877076 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd" Jan 30 21:44:41 crc kubenswrapper[4869]: I0130 21:44:41.949281 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 02:05:43.671481096 +0000 UTC Jan 30 21:44:41 crc kubenswrapper[4869]: I0130 21:44:41.960816 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:41 crc kubenswrapper[4869]: I0130 21:44:41.960878 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:41 crc kubenswrapper[4869]: I0130 21:44:41.960924 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:41 crc kubenswrapper[4869]: I0130 21:44:41.960951 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:41 crc kubenswrapper[4869]: I0130 21:44:41.960970 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:41Z","lastTransitionTime":"2026-01-30T21:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.064791 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.064853 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.064871 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.064931 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.064960 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:42Z","lastTransitionTime":"2026-01-30T21:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.168179 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.168248 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.168265 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.168297 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.168316 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:42Z","lastTransitionTime":"2026-01-30T21:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.272570 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.272648 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.272674 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.272705 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.272725 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:42Z","lastTransitionTime":"2026-01-30T21:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.375987 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.376044 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.376066 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.376094 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.376121 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:42Z","lastTransitionTime":"2026-01-30T21:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.481100 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.481174 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.481199 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.481232 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.481255 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:42Z","lastTransitionTime":"2026-01-30T21:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.584347 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.584402 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.584420 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.584453 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.584471 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:42Z","lastTransitionTime":"2026-01-30T21:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.688216 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.688272 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.688291 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.688315 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.688333 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:42Z","lastTransitionTime":"2026-01-30T21:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.791804 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.791879 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.791911 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.791931 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.791944 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:42Z","lastTransitionTime":"2026-01-30T21:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.895382 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.895429 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.895442 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.895460 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.895474 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:42Z","lastTransitionTime":"2026-01-30T21:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.950299 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 17:04:25.810539714 +0000 UTC Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.998786 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.998860 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.998877 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.998932 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:42 crc kubenswrapper[4869]: I0130 21:44:42.998953 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:42Z","lastTransitionTime":"2026-01-30T21:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.101673 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.101709 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.101718 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.101732 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.101741 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:43Z","lastTransitionTime":"2026-01-30T21:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.206211 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.206290 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.206309 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.206339 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.206359 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:43Z","lastTransitionTime":"2026-01-30T21:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.310855 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.310969 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.310993 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.311025 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.311046 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:43Z","lastTransitionTime":"2026-01-30T21:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.414316 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.414460 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.414497 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.414537 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.414569 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:43Z","lastTransitionTime":"2026-01-30T21:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.518831 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.518884 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.518924 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.518947 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.518961 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:43Z","lastTransitionTime":"2026-01-30T21:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.622178 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.622245 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.622265 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.622296 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.622315 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:43Z","lastTransitionTime":"2026-01-30T21:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.707893 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:44:43 crc kubenswrapper[4869]: E0130 21:44:43.708377 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:45:47.708320069 +0000 UTC m=+148.594078134 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.726463 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.726543 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.726560 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.726588 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.726605 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:43Z","lastTransitionTime":"2026-01-30T21:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.809049 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.809143 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.809193 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.809257 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:44:43 crc kubenswrapper[4869]: E0130 21:44:43.809468 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:44:43 crc kubenswrapper[4869]: E0130 21:44:43.809498 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:44:43 crc kubenswrapper[4869]: E0130 21:44:43.809518 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:44:43 crc kubenswrapper[4869]: E0130 21:44:43.809787 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 21:45:47.809761812 +0000 UTC m=+148.695519877 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:44:43 crc kubenswrapper[4869]: E0130 21:44:43.810156 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:44:43 crc kubenswrapper[4869]: E0130 21:44:43.810207 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:45:47.810193106 +0000 UTC m=+148.695951161 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:44:43 crc kubenswrapper[4869]: E0130 21:44:43.810256 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:44:43 crc kubenswrapper[4869]: E0130 21:44:43.810337 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:44:43 crc kubenswrapper[4869]: E0130 21:44:43.810377 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:44:43 crc kubenswrapper[4869]: E0130 21:44:43.810399 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-30 21:45:47.810360541 +0000 UTC m=+148.696118816 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:44:43 crc kubenswrapper[4869]: E0130 21:44:43.810401 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:44:43 crc kubenswrapper[4869]: E0130 21:44:43.810482 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 21:45:47.810461504 +0000 UTC m=+148.696219709 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.829953 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.830029 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.830047 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.830076 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.830096 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:43Z","lastTransitionTime":"2026-01-30T21:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.877225 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.877234 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.877260 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:44:43 crc kubenswrapper[4869]: E0130 21:44:43.877428 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.877497 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:44:43 crc kubenswrapper[4869]: E0130 21:44:43.877858 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd" Jan 30 21:44:43 crc kubenswrapper[4869]: E0130 21:44:43.878081 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:44:43 crc kubenswrapper[4869]: E0130 21:44:43.878190 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.878396 4869 scope.go:117] "RemoveContainer" containerID="81c545886305c472b5c9ce68265f9d0fc2177184916b45135691b0413eb72c0c" Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.933279 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.933366 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.933394 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.933428 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.933446 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:43Z","lastTransitionTime":"2026-01-30T21:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:43 crc kubenswrapper[4869]: I0130 21:44:43.951283 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 12:57:49.599865303 +0000 UTC Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.037714 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.037776 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.037789 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.037811 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.037826 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:44Z","lastTransitionTime":"2026-01-30T21:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.147750 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.147859 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.147962 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.148002 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.148029 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:44Z","lastTransitionTime":"2026-01-30T21:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.251771 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.252315 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.252436 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.252562 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.252848 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:44Z","lastTransitionTime":"2026-01-30T21:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.356500 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.356578 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.356604 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.356640 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.356665 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:44Z","lastTransitionTime":"2026-01-30T21:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.363084 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-stqvf_c39d4fe5-06cd-4ea4-8336-bd481332c475/ovnkube-controller/2.log" Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.459868 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.459935 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.459947 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.459964 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.459979 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:44Z","lastTransitionTime":"2026-01-30T21:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.563074 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.563416 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.563429 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.563448 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.563463 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:44Z","lastTransitionTime":"2026-01-30T21:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.665636 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.665710 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.665729 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.665756 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.665777 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:44Z","lastTransitionTime":"2026-01-30T21:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.768798 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.768844 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.768860 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.768877 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.768887 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:44Z","lastTransitionTime":"2026-01-30T21:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.878241 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.878284 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.878297 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.878314 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.878327 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:44Z","lastTransitionTime":"2026-01-30T21:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.952594 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 20:01:35.664378489 +0000 UTC Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.981413 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.981454 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.981469 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.981487 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:44 crc kubenswrapper[4869]: I0130 21:44:44.981500 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:44Z","lastTransitionTime":"2026-01-30T21:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.084119 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.084187 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.084210 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.084241 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.084264 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:45Z","lastTransitionTime":"2026-01-30T21:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.188221 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.188282 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.188299 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.188323 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.188337 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:45Z","lastTransitionTime":"2026-01-30T21:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.291647 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.291686 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.291699 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.291719 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.291732 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:45Z","lastTransitionTime":"2026-01-30T21:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.374860 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-stqvf_c39d4fe5-06cd-4ea4-8336-bd481332c475/ovnkube-controller/3.log" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.376284 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-stqvf_c39d4fe5-06cd-4ea4-8336-bd481332c475/ovnkube-controller/2.log" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.381276 4869 generic.go:334] "Generic (PLEG): container finished" podID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerID="92b01be743457d0429d6a2015dd3ee61689a3806691b49ca36b1cdf62717cb21" exitCode=1 Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.381335 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" event={"ID":"c39d4fe5-06cd-4ea4-8336-bd481332c475","Type":"ContainerDied","Data":"92b01be743457d0429d6a2015dd3ee61689a3806691b49ca36b1cdf62717cb21"} Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.381389 4869 scope.go:117] "RemoveContainer" containerID="81c545886305c472b5c9ce68265f9d0fc2177184916b45135691b0413eb72c0c" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.382822 4869 scope.go:117] "RemoveContainer" containerID="92b01be743457d0429d6a2015dd3ee61689a3806691b49ca36b1cdf62717cb21" Jan 30 21:44:45 crc kubenswrapper[4869]: E0130 21:44:45.383321 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-stqvf_openshift-ovn-kubernetes(c39d4fe5-06cd-4ea4-8336-bd481332c475)\"" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.395882 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.395985 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.396000 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.396024 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.396040 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:45Z","lastTransitionTime":"2026-01-30T21:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.404799 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.429499 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd656f9-7188-4998-8195-c2ec92442b7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeed7a751cd65072ffc31d4db0396578155d50518ad9dae28ced02985d1e4c62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jdbl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.443721 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c24fb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a9e9fe8-01db-40b0-bdd8-e3d626df037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c97e257984df620e4f118db44cb63e7748a6d62887c3904f1a2a0641991602d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p2pd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c24fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.455503 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-45w6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b980f4db-64d3-48c9-9ff8-18f23c4888cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf95q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf95q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-45w6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.479016 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c39d4fe5-06cd-4ea4-8336-bd481332c475\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92b01be743457d0429d6a2015dd3ee61689a3806
691b49ca36b1cdf62717cb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81c545886305c472b5c9ce68265f9d0fc2177184916b45135691b0413eb72c0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:44:18Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 21:44:18.748549 6628 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0130 21:44:18.748601 6628 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0130 21:44:18.748636 6628 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0130 21:44:18.748776 6628 factory.go:1336] Added *v1.Node event handler 7\\\\nI0130 21:44:18.748890 6628 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0130 21:44:18.749458 6628 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0130 21:44:18.749579 6628 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0130 21:44:18.749620 6628 ovnkube.go:599] Stopped ovnkube\\\\nI0130 21:44:18.749653 6628 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 21:44:18.749736 6628 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:44:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b01be743457d0429d6a2015dd3ee61689a3806691b49ca36b1cdf62717cb21\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:44:45Z\\\",\\\"message\\\":\\\"network_controller.go:776] Recording success event on pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc\\\\nI0130 21:44:45.263101 7017 services_controller.go:356] Processing sync for service openshift-machine-config-operator/machine-config-operator for network=default\\\\nI0130 21:44:45.263181 7017 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nI0130 21:44:45.263209 7017 services_controller.go:360] Finished syncing service olm-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 2.91078ms\\\\nI0130 21:44:45.263249 7017 services_controller.go:356] Processing sync for service openshift-multus/multus-admission-controller for network=default\\\\nF0130 21:44:45.263178 7017 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin 
network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:44:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\"
,\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-stqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.494602 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qnlzn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e92fcbf8-b6b4-4531-a7cf-ed59225dd821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43fd4613a6d7e2360584cfe98ccaa73d8ed65139ba3c249aeb716b4303d170eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c73341f991fc768d5dcc57229c33204b0ade5469b5b9d7a970f38b97c3d6696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qnlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:45Z is after 2025-08-24T17:21:41Z" Jan 30 
21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.498874 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.498948 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.498968 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.498994 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.499013 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:45Z","lastTransitionTime":"2026-01-30T21:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.509962 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a48f728-068e-4f6e-9794-12d375245df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb0a197870fc1ba15cbe9437452dfeac28e7670fe2383e71f5e29276082f7236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f22a7faa65595d272f6cb952ac0054da3d3b6a0017dc004f52c65d823f1da06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d
7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d705fd79e95205c59ebe341512286e7be25a663cdfe41ec50f0f57582c9f5ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d12f1427fc164e0f92a0ce90e116d3239ad1c56376666cea3ef92178d0988076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 21:43:39.192174 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 21:43:39.192295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:43:39.192865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2603741252/tls.crt::/tmp/serving-cert-2603741252/tls.key\\\\\\\"\\\\nI0130 21:43:39.581448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:43:39.592260 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:43:39.592286 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:43:39.592313 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:43:39.592318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:43:39.598780 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:43:39.598801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:43:39.598815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:43:39.598819 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:43:39.598822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:43:39.598829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:43:39.601912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9563e1c219434c81376559138e931e8457d653060f639891e4a15b3a61bcf2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.524413 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0cf51fe-c6b0-432d-934e-df3b70d5895c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7301698629901946ea3fa72f6c03cac1a88d253d9e14f4a8c225c8ff390fc0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e817aca3610a40823460973541a6bb549f4d88d55919c93640ae8c6dc0874946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e817aca3610a40823460973541a6bb549f4d88d55919c93640ae8c6dc0874946\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.541025 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e768b90cb0dae0e7c594e45deebf9178321fecc14339fa1e49769e13c950ff9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.556977 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6fc0664-5e80-440d-a6e8-4189cdf5c500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a73c5cac1fdcb8d81bccdb2eea7b34e75752439d7871493f4afdd0325f02b2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fa528823e4fa19f4de380fa7c5ab78f40c763a6ea6624ba20e1a65a81980d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vzgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.574371 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-tz8jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac3c503-e284-4df8-ae5e-0084a884e456\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09e707217ddad89be77f68915c4948c1bbc2e44066f16cce7e255a5a91c1e101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6d47a8f0a09eac03369c1f21e44d8ec6aaf85d4d7ad180579ecd30a6311061\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:44:28Z\\\",\\\"message\\\":\\\"2026-01-30T21:43:42+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d7736fda-c03e-486c-aa38-7f904c4e5cbe\\\\n2026-01-30T21:43:42+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d7736fda-c03e-486c-aa38-7f904c4e5cbe to /host/opt/cni/bin/\\\\n2026-01-30T21:43:43Z [verbose] multus-daemon started\\\\n2026-01-30T21:43:43Z [verbose] Readiness Indicator file check\\\\n2026-01-30T21:44:28Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgb6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tz8jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.588846 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13da86d5-c968-4e20-88dd-86b52c6402a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5f07c14a94021761f655ab66d0782a878dacdacbe9abc2c02db132e5f5721a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0c8909f295c8aea1aa2d6da1a709d1692b8fbdd6bb389d339d644655788328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://267daf399d26dd0253eed95c94b44bd4299990a14bb28159ddfd1f324dec6192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fd36ca3590a916ec08c8ae4d3b410eb11261c6a5295a4274119f93e08079cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.601386 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.601422 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.601431 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.601445 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.601455 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:45Z","lastTransitionTime":"2026-01-30T21:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.606152 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.620993 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b14ccc4339f1670442e8d9c16039fb3607ed2097daf0e58291333a9d566dc02e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.637096 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a98ef3151784bd76a916ca4dc6d85c25c10bf0d9eca4444780ceb2c826e439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7741a57e2e55c2f88115cf30643e5c4cd87ab58081677cccdd027bf1307a7a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.654518 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f947b452-4e15-43ce-a4f4-3ca13d10f8d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://754edbe89075ed676689e09cc4aceaa7e45560a1db966e12c22a96be3bd3eba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68975f2095b7dcd1eaf59dd358c4ac1b731116418a8991290d4ee2d66611f2d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://568c346025fcd6f3b511f60206c6836f445c3e1be15135c064fed0507d20aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07626e9fc038e2c00d6751a0660aedfe5e66e6f735999b30cbffb7d6f8ff7011\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07626e9fc038e2c00d6751a0660aedfe5e66e6f735999b30cbffb7d6f8ff7011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.671880 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.690023 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v9n4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ca98e-63b4-4337-af06-7525b62333b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c53920051c0c7658e9ffebac1382950ffea94ead520f2ca1b04cb6a0f9e15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk245\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v9n4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T21:44:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.704222 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.704288 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.704305 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.704335 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.704352 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:45Z","lastTransitionTime":"2026-01-30T21:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.806766 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.806809 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.806821 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.806838 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.806856 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:45Z","lastTransitionTime":"2026-01-30T21:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.876494 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:44:45 crc kubenswrapper[4869]: E0130 21:44:45.876640 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.876523 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.876501 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.876671 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:44:45 crc kubenswrapper[4869]: E0130 21:44:45.876874 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:44:45 crc kubenswrapper[4869]: E0130 21:44:45.876968 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:44:45 crc kubenswrapper[4869]: E0130 21:44:45.877063 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.908718 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.908759 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.908769 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.908786 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.908797 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:45Z","lastTransitionTime":"2026-01-30T21:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:45 crc kubenswrapper[4869]: I0130 21:44:45.954572 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 04:01:23.632457917 +0000 UTC Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.011132 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.011172 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.011181 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.011195 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.011203 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:46Z","lastTransitionTime":"2026-01-30T21:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.113732 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.113785 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.113796 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.113813 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.113825 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:46Z","lastTransitionTime":"2026-01-30T21:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.215685 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.215741 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.215758 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.215781 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.215798 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:46Z","lastTransitionTime":"2026-01-30T21:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.318697 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.318748 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.318760 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.318778 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.318792 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:46Z","lastTransitionTime":"2026-01-30T21:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.385782 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-stqvf_c39d4fe5-06cd-4ea4-8336-bd481332c475/ovnkube-controller/3.log" Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.420846 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.420889 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.420998 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.421016 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.421029 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:46Z","lastTransitionTime":"2026-01-30T21:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.523663 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.523705 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.523716 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.523732 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.523743 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:46Z","lastTransitionTime":"2026-01-30T21:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.626751 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.626814 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.626823 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.626839 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.626849 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:46Z","lastTransitionTime":"2026-01-30T21:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.729398 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.729442 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.729455 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.729471 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.729482 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:46Z","lastTransitionTime":"2026-01-30T21:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.832421 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.832479 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.832494 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.832513 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.832524 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:46Z","lastTransitionTime":"2026-01-30T21:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.935735 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.935771 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.935779 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.935793 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.935802 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:46Z","lastTransitionTime":"2026-01-30T21:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:46 crc kubenswrapper[4869]: I0130 21:44:46.955457 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 08:55:09.295437933 +0000 UTC Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.038965 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.039017 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.039033 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.039059 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.039075 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:47Z","lastTransitionTime":"2026-01-30T21:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.141423 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.141519 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.141537 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.141556 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.141568 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:47Z","lastTransitionTime":"2026-01-30T21:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.245504 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.245598 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.245612 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.245633 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.245647 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:47Z","lastTransitionTime":"2026-01-30T21:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.348484 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.348525 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.348536 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.348553 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.348565 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:47Z","lastTransitionTime":"2026-01-30T21:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.451852 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.451914 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.451927 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.451942 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.451954 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:47Z","lastTransitionTime":"2026-01-30T21:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.555108 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.555157 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.555168 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.555182 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.555191 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:47Z","lastTransitionTime":"2026-01-30T21:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.658199 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.658238 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.658268 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.658284 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.658295 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:47Z","lastTransitionTime":"2026-01-30T21:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.672110 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.672175 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.672198 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.672224 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.672240 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:47Z","lastTransitionTime":"2026-01-30T21:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:47 crc kubenswrapper[4869]: E0130 21:44:47.686783 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eed4c80a-2486-4f50-8ae9-4ddbd620d70e\\\",\\\"systemUUID\\\":\\\"073254b5-c7c0-49f1-bed8-4438b0f03db1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:47Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.691376 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.691472 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.691543 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.691574 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.691646 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:47Z","lastTransitionTime":"2026-01-30T21:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:47 crc kubenswrapper[4869]: E0130 21:44:47.704566 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eed4c80a-2486-4f50-8ae9-4ddbd620d70e\\\",\\\"systemUUID\\\":\\\"073254b5-c7c0-49f1-bed8-4438b0f03db1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:47Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.712003 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.712046 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.712058 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.712077 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.712089 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:47Z","lastTransitionTime":"2026-01-30T21:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:47 crc kubenswrapper[4869]: E0130 21:44:47.725158 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eed4c80a-2486-4f50-8ae9-4ddbd620d70e\\\",\\\"systemUUID\\\":\\\"073254b5-c7c0-49f1-bed8-4438b0f03db1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:47Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.728995 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.729035 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.729046 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.729061 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.729072 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:47Z","lastTransitionTime":"2026-01-30T21:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:47 crc kubenswrapper[4869]: E0130 21:44:47.742975 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eed4c80a-2486-4f50-8ae9-4ddbd620d70e\\\",\\\"systemUUID\\\":\\\"073254b5-c7c0-49f1-bed8-4438b0f03db1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:47Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.747844 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.747884 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.747908 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.747930 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.747943 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:47Z","lastTransitionTime":"2026-01-30T21:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:47 crc kubenswrapper[4869]: E0130 21:44:47.759023 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eed4c80a-2486-4f50-8ae9-4ddbd620d70e\\\",\\\"systemUUID\\\":\\\"073254b5-c7c0-49f1-bed8-4438b0f03db1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:47Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:47 crc kubenswrapper[4869]: E0130 21:44:47.759200 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.760992 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
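Every patch attempt in this burst dies at the same point, and the kubelet then abandons the update after exhausting its fixed retry budget (the "exceeds retry count" entry above). The failure itself is pure clock skew: the node believes it is 2026-01-30, which is after the webhook serving certificate's NotAfter of 2025-08-24, so Go's TLS stack rejects the handshake before the patch is ever evaluated. Below is a minimal sketch of that validity-window check in Go; the certificate path is a hypothetical stand-in, not where the network-node-identity webhook actually keeps its serving cert.

    // certcheck.go - a minimal sketch (not kubelet code) of the x509
    // validity-window comparison behind "certificate has expired or is
    // not yet valid".
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Hypothetical path; substitute the webhook's actual serving cert.
        data, err := os.ReadFile("/tmp/webhook-serving.crt")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Fprintln(os.Stderr, "no PEM block found")
            os.Exit(1)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        now := time.Now()
        // The same window comparison crypto/x509 performs during verification.
        switch {
        case now.Before(cert.NotBefore):
            fmt.Printf("certificate not yet valid: current time %s is before %s\n",
                now.UTC().Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
        case now.After(cert.NotAfter):
            fmt.Printf("certificate has expired: current time %s is after %s\n",
                now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
        default:
            fmt.Printf("certificate valid until %s\n", cert.NotAfter.UTC().Format(time.RFC3339))
        }
    }

Run against the webhook's serving cert with the system clock as logged, this reproduces the "current time ... is after ..." wording carried by the handshake error above.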
event="NodeHasSufficientMemory" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.761059 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.761085 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.761120 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.761148 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:47Z","lastTransitionTime":"2026-01-30T21:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.863684 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.863732 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.863746 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.863761 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.863772 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:47Z","lastTransitionTime":"2026-01-30T21:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.876073 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.876118 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.876120 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:44:47 crc kubenswrapper[4869]: E0130 21:44:47.876201 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.876257 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:44:47 crc kubenswrapper[4869]: E0130 21:44:47.876360 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:44:47 crc kubenswrapper[4869]: E0130 21:44:47.876400 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:44:47 crc kubenswrapper[4869]: E0130 21:44:47.876462 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.956600 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 20:13:32.507785277 +0000 UTC Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.966371 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.966418 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.966433 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.966452 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:47 crc kubenswrapper[4869]: I0130 21:44:47.966468 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:47Z","lastTransitionTime":"2026-01-30T21:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.069386 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.069442 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.069458 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.069477 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.069492 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:48Z","lastTransitionTime":"2026-01-30T21:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.172148 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.172209 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.172239 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.172285 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.172311 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:48Z","lastTransitionTime":"2026-01-30T21:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.275248 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.275287 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.275299 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.275316 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.275327 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:48Z","lastTransitionTime":"2026-01-30T21:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.377929 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.377966 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.377975 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.377992 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.378004 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:48Z","lastTransitionTime":"2026-01-30T21:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.480043 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.480082 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.480095 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.480111 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.480122 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:48Z","lastTransitionTime":"2026-01-30T21:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.582799 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.582843 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.582854 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.582871 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.582882 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:48Z","lastTransitionTime":"2026-01-30T21:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.686134 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.686184 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.686195 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.686213 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.686226 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:48Z","lastTransitionTime":"2026-01-30T21:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.788398 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.788442 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.788452 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.788467 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.788478 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:48Z","lastTransitionTime":"2026-01-30T21:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.890792 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.890837 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.890850 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.890867 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.890878 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:48Z","lastTransitionTime":"2026-01-30T21:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.957114 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 16:20:00.218329411 +0000 UTC Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.993486 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.993517 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.993525 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.993539 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:48 crc kubenswrapper[4869]: I0130 21:44:48.993552 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:48Z","lastTransitionTime":"2026-01-30T21:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.096388 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.096425 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.096434 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.096448 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.096456 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:49Z","lastTransitionTime":"2026-01-30T21:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.198438 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.198483 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.198493 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.198508 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.198517 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:49Z","lastTransitionTime":"2026-01-30T21:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.301391 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.301426 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.301438 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.301454 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.301467 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:49Z","lastTransitionTime":"2026-01-30T21:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.404508 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.404598 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.404622 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.404653 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.404677 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:49Z","lastTransitionTime":"2026-01-30T21:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.507192 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.507229 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.507240 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.507255 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.507264 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:49Z","lastTransitionTime":"2026-01-30T21:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.610177 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.610225 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.610236 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.610252 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.610262 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:49Z","lastTransitionTime":"2026-01-30T21:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.713253 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.713294 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.713306 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.713319 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.713329 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:49Z","lastTransitionTime":"2026-01-30T21:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.815106 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.815163 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.815175 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.815199 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.815224 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:49Z","lastTransitionTime":"2026-01-30T21:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.876845 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:44:49 crc kubenswrapper[4869]: E0130 21:44:49.877112 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.877159 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.876854 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:44:49 crc kubenswrapper[4869]: E0130 21:44:49.877353 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.877381 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:44:49 crc kubenswrapper[4869]: E0130 21:44:49.877506 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:44:49 crc kubenswrapper[4869]: E0130 21:44:49.877647 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd" Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.892427 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0cf51fe-c6b0-432d-934e-df3b70d5895c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7301698629901946ea3fa72f6c03cac1a88d253d9e14f4a8c225c8ff390fc0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e817aca3610a40823460973541a6bb549f4d88d55919c93640ae8c6dc0874946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e817aca3610a40823460973541a6bb549f4d88d55919c93640ae8c6dc0874946\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.902830 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e768b90cb0dae0e7c594e45deebf9178321fecc14339fa1e49769e13c950ff9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.914061 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6fc0664-5e80-440d-a6e8-4189cdf5c500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a73c5cac1fdcb8d81bccdb2eea7b34e75752439d7871493f4afdd0325f02b2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fa528823e4fa19f4de380fa7c5ab78f40c763a6ea6624ba20e1a65a81980d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vzgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.921073 4869 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.921126 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.921137 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.921154 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.921169 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:49Z","lastTransitionTime":"2026-01-30T21:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.927977 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tz8jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac3c503-e284-4df8-ae5e-0084a884e456\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09e707217ddad89be77f68915c4948c1bbc2e44066f16cce7e255a5a91c1e101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6d47a8f0a09eac03369c1f21e44d8ec6aaf85d4d7ad180579ecd30a6311061\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:44:28Z\\\",\\\"message\\\":\\\"2026-01-30T21:43:42+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d7736fda-c03e-486c-aa38-7f904c4e5cbe\\\\n2026-01-30T21:43:42+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d7736fda-c03e-486c-aa38-7f904c4e5cbe to /host/opt/cni/bin/\\\\n2026-01-30T21:43:43Z [verbose] multus-daemon started\\\\n2026-01-30T21:43:43Z [verbose] Readiness Indicator file check\\\\n2026-01-30T21:44:28Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgb6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tz8jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.945011 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c39d4fe5-06cd-4ea4-8336-bd481332c475\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92b01be743457d0429d6a2015dd3ee61689a3806691b49ca36b1cdf62717cb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81c545886305c472b5c9ce68265f9d0fc2177184916b45135691b0413eb72c0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:44:18Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 21:44:18.748549 6628 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0130 21:44:18.748601 6628 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0130 21:44:18.748636 6628 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0130 21:44:18.748776 6628 factory.go:1336] Added *v1.Node event handler 7\\\\nI0130 21:44:18.748890 6628 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0130 21:44:18.749458 6628 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0130 21:44:18.749579 6628 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0130 21:44:18.749620 6628 ovnkube.go:599] Stopped ovnkube\\\\nI0130 21:44:18.749653 6628 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 21:44:18.749736 6628 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:44:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b01be743457d0429d6a2015dd3ee61689a3806691b49ca36b1cdf62717cb21\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:44:45Z\\\",\\\"message\\\":\\\"network_controller.go:776] Recording success event on pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc\\\\nI0130 21:44:45.263101 7017 services_controller.go:356] Processing sync for service openshift-machine-config-operator/machine-config-operator for network=default\\\\nI0130 21:44:45.263181 7017 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nI0130 21:44:45.263209 7017 services_controller.go:360] Finished syncing service olm-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 2.91078ms\\\\nI0130 21:44:45.263249 7017 services_controller.go:356] Processing sync for service openshift-multus/multus-admission-controller for 
network=default\\\\nF0130 21:44:45.263178 7017 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:44:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountP
ath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-stqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.958150 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 14:22:56.175236348 +0000 UTC Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.958281 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qnlzn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e92fcbf8-b6b4-4531-a7cf-ed59225dd821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43fd4613a6d7e2360584cfe98ccaa73d8ed65139ba3c249aeb716b4303d170eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c73341f991fc768d5dcc57229c33204b0ade5469b5b9d7a970f38b97c3d6696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qnlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:49Z is after 2025-08-24T17:21:41Z" Jan 30 
21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.973326 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a48f728-068e-4f6e-9794-12d375245df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb0a197870fc1ba15cbe9437452dfeac28e7670fe2383e71f5e29276082f7236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f22a7faa65595d272f6cb952ac0054da3d3b6a0017dc004f52c65d823f1da06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d705fd79e95205c59ebe341512286e7be25a663cdfe41ec50f0f57582c9f5ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d12f1427fc164e0f92a0ce90e116d3239ad1c56376666cea3ef92178d0988076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 21:43:39.192174 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 21:43:39.192295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:43:39.192865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2603741252/tls.crt::/tmp/serving-cert-2603741252/tls.key\\\\\\\"\\\\nI0130 21:43:39.581448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:43:39.592260 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:43:39.592286 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:43:39.592313 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:43:39.592318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:43:39.598780 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:43:39.598801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:43:39.598815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:43:39.598819 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:43:39.598822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:43:39.598829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:43:39.601912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9563e1c219434c81376559138e931e8457d653060f639891e4a15b3a61bcf2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.984813 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:49 crc kubenswrapper[4869]: I0130 21:44:49.995332 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b14ccc4339f1670442e8d9c16039fb3607ed2097daf0e58291333a9d566dc02e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.007205 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a98ef3151784bd76a916ca4dc6d85c25c10bf0d9eca4444780ceb2c826e439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7741a57e2e55c2f88115cf30643e5c4cd87ab58081677cccdd027bf1307a7a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.018807 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13da86d5-c968-4e20-88dd-86b52c6402a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5f07c14a94021761f655ab66d0782a878dacdacbe9abc2c02db132e5f5721a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0c8909f295c8aea1aa2d6da1a709d1692b8fbdd6bb389d339d644655788328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://267daf399d26dd0253eed95c94b44bd4299990a14bb28159ddfd1f324dec6192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fd36ca3590a916ec08c8ae4d3b410eb11261c6a5295a4274119f93e08079cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.023385 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.023447 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.023457 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.023472 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.023482 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:50Z","lastTransitionTime":"2026-01-30T21:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.030810 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.040019 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v9n4p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ca98e-63b4-4337-af06-7525b62333b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c53920051c0c7658e9ffebac1382950ffea94ead520f2ca1b04cb6a0f9e15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk245\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v9n4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.049874 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f947b452-4e15-43ce-a4f4-3ca13d10f8d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://754edbe89075ed676689e09cc4aceaa7e45560a1db966e12c22a96be3bd3eba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68975f2095b7dcd1eaf59dd358c4ac1b731116418a8991290d4ee2d66611f2d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://568c346025fcd6f3b511f60206c6836f445c3e1be15135c064fed0507d20aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07626e9fc038e2c00d6751a0660aedfe5e66e6f735999b30cbffb7d6f8ff7011\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07626e9fc038e2c00d6751a0660aedfe5e66e6f735999b30cbffb7d6f8ff7011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.063243 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd656f9-7188-4998-8195-c2ec92442b7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeed7a751cd65072ffc31d4db0396578155d50518ad9dae28ced02985d1e4c62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f
8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\"
,\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jdbl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.073565 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c24fb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a9e9fe8-01db-40b0-bdd8-e3d626df037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c97e257984df620e4f118db44cb63e7748a6d62887c3904f1a2a0641991602d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p2pd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c24fb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.084729 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-45w6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b980f4db-64d3-48c9-9ff8-18f23c4888cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf95q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf95q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-45w6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:50 crc 
kubenswrapper[4869]: I0130 21:44:50.097858 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.125785 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.125841 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.125852 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.125868 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.125878 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:50Z","lastTransitionTime":"2026-01-30T21:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.228587 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.228866 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.228878 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.228911 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.228925 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:50Z","lastTransitionTime":"2026-01-30T21:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.331116 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.331152 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.331188 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.331202 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.331211 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:50Z","lastTransitionTime":"2026-01-30T21:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.433462 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.433504 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.433513 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.433528 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.433537 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:50Z","lastTransitionTime":"2026-01-30T21:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.535686 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.535726 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.535736 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.535750 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.535762 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:50Z","lastTransitionTime":"2026-01-30T21:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.638221 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.638261 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.638270 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.638285 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.638294 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:50Z","lastTransitionTime":"2026-01-30T21:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.740664 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.740708 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.740717 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.740735 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.740746 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:50Z","lastTransitionTime":"2026-01-30T21:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.843836 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.843888 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.843919 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.843938 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.843954 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:50Z","lastTransitionTime":"2026-01-30T21:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.946376 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.946425 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.946434 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.946450 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.946461 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:50Z","lastTransitionTime":"2026-01-30T21:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:50 crc kubenswrapper[4869]: I0130 21:44:50.959033 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 03:33:49.358166405 +0000 UTC Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.048439 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.048480 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.048490 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.048505 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.048516 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:51Z","lastTransitionTime":"2026-01-30T21:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.151028 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.151101 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.151119 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.151148 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.151166 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:51Z","lastTransitionTime":"2026-01-30T21:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.257760 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.257804 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.257812 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.257826 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.257836 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:51Z","lastTransitionTime":"2026-01-30T21:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.360665 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.360710 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.360722 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.360738 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.360751 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:51Z","lastTransitionTime":"2026-01-30T21:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.462830 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.462926 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.462941 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.462957 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.462967 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:51Z","lastTransitionTime":"2026-01-30T21:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.565559 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.565599 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.565607 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.565621 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.565631 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:51Z","lastTransitionTime":"2026-01-30T21:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.668662 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.668726 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.668743 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.668767 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.668785 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:51Z","lastTransitionTime":"2026-01-30T21:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.771292 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.771331 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.771339 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.771353 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.771362 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:51Z","lastTransitionTime":"2026-01-30T21:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.873572 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.873604 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.873612 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.873624 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.873633 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:51Z","lastTransitionTime":"2026-01-30T21:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.876136 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.876151 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.876215 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:44:51 crc kubenswrapper[4869]: E0130 21:44:51.876224 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.876333 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:44:51 crc kubenswrapper[4869]: E0130 21:44:51.876392 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:44:51 crc kubenswrapper[4869]: E0130 21:44:51.876352 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:44:51 crc kubenswrapper[4869]: E0130 21:44:51.876569 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd" Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.960063 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 15:16:00.153701085 +0000 UTC Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.976088 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.976129 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.976143 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.976160 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:51 crc kubenswrapper[4869]: I0130 21:44:51.976171 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:51Z","lastTransitionTime":"2026-01-30T21:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:52 crc kubenswrapper[4869]: I0130 21:44:52.079391 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:52 crc kubenswrapper[4869]: I0130 21:44:52.079446 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:52 crc kubenswrapper[4869]: I0130 21:44:52.079460 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:52 crc kubenswrapper[4869]: I0130 21:44:52.079474 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:52 crc kubenswrapper[4869]: I0130 21:44:52.079485 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:52Z","lastTransitionTime":"2026-01-30T21:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:52 crc kubenswrapper[4869]: I0130 21:44:52.181693 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:52 crc kubenswrapper[4869]: I0130 21:44:52.181750 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:52 crc kubenswrapper[4869]: I0130 21:44:52.181765 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:52 crc kubenswrapper[4869]: I0130 21:44:52.181783 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:52 crc kubenswrapper[4869]: I0130 21:44:52.181792 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:52Z","lastTransitionTime":"2026-01-30T21:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:52 crc kubenswrapper[4869]: I0130 21:44:52.284067 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:52 crc kubenswrapper[4869]: I0130 21:44:52.284111 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:52 crc kubenswrapper[4869]: I0130 21:44:52.284122 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:52 crc kubenswrapper[4869]: I0130 21:44:52.284135 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:52 crc kubenswrapper[4869]: I0130 21:44:52.284145 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:52Z","lastTransitionTime":"2026-01-30T21:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:52 crc kubenswrapper[4869]: I0130 21:44:52.386053 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:52 crc kubenswrapper[4869]: I0130 21:44:52.386111 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:52 crc kubenswrapper[4869]: I0130 21:44:52.386128 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:52 crc kubenswrapper[4869]: I0130 21:44:52.386150 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:52 crc kubenswrapper[4869]: I0130 21:44:52.386167 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:52Z","lastTransitionTime":"2026-01-30T21:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:52 crc kubenswrapper[4869]: I0130 21:44:52.488867 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:52 crc kubenswrapper[4869]: I0130 21:44:52.488924 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:52 crc kubenswrapper[4869]: I0130 21:44:52.488936 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:52 crc kubenswrapper[4869]: I0130 21:44:52.488951 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:52 crc kubenswrapper[4869]: I0130 21:44:52.488962 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:52Z","lastTransitionTime":"2026-01-30T21:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:52 crc kubenswrapper[4869]: I0130 21:44:52.591676 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:52 crc kubenswrapper[4869]: I0130 21:44:52.591724 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:52 crc kubenswrapper[4869]: I0130 21:44:52.591743 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:52 crc kubenswrapper[4869]: I0130 21:44:52.591765 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:52 crc kubenswrapper[4869]: I0130 21:44:52.591781 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:52Z","lastTransitionTime":"2026-01-30T21:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:52 crc kubenswrapper[4869]: I0130 21:44:52.694810 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:52 crc kubenswrapper[4869]: I0130 21:44:52.695034 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:52 crc kubenswrapper[4869]: I0130 21:44:52.695075 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:52 crc kubenswrapper[4869]: I0130 21:44:52.695116 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:52 crc kubenswrapper[4869]: I0130 21:44:52.695143 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:52Z","lastTransitionTime":"2026-01-30T21:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:52 crc kubenswrapper[4869]: I0130 21:44:52.797806 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:52 crc kubenswrapper[4869]: I0130 21:44:52.797845 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:52 crc kubenswrapper[4869]: I0130 21:44:52.797856 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:52 crc kubenswrapper[4869]: I0130 21:44:52.797871 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:52 crc kubenswrapper[4869]: I0130 21:44:52.797881 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:52Z","lastTransitionTime":"2026-01-30T21:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:52 crc kubenswrapper[4869]: I0130 21:44:52.900585 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:52 crc kubenswrapper[4869]: I0130 21:44:52.900628 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:52 crc kubenswrapper[4869]: I0130 21:44:52.900637 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:52 crc kubenswrapper[4869]: I0130 21:44:52.900651 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:52 crc kubenswrapper[4869]: I0130 21:44:52.900660 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:52Z","lastTransitionTime":"2026-01-30T21:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:52 crc kubenswrapper[4869]: I0130 21:44:52.960473 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 12:14:11.839491188 +0000 UTC Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.002653 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.002704 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.002716 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.002734 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.002747 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:53Z","lastTransitionTime":"2026-01-30T21:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.104527 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.104600 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.104610 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.104623 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.104658 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:53Z","lastTransitionTime":"2026-01-30T21:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.207247 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.207289 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.207297 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.207317 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.207326 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:53Z","lastTransitionTime":"2026-01-30T21:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.309930 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.309971 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.309980 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.309994 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.310004 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:53Z","lastTransitionTime":"2026-01-30T21:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.412312 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.412364 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.412383 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.412405 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.412422 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:53Z","lastTransitionTime":"2026-01-30T21:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.514670 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.514714 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.514730 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.514751 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.514802 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:53Z","lastTransitionTime":"2026-01-30T21:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.617296 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.617339 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.617367 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.617385 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.617397 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:53Z","lastTransitionTime":"2026-01-30T21:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.719733 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.719776 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.719787 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.719802 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.719815 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:53Z","lastTransitionTime":"2026-01-30T21:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.822423 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.822479 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.822491 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.822508 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.822519 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:53Z","lastTransitionTime":"2026-01-30T21:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.876447 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.876554 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.876611 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:44:53 crc kubenswrapper[4869]: E0130 21:44:53.876612 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.876610 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:44:53 crc kubenswrapper[4869]: E0130 21:44:53.877346 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd" Jan 30 21:44:53 crc kubenswrapper[4869]: E0130 21:44:53.877729 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:44:53 crc kubenswrapper[4869]: E0130 21:44:53.878082 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.926083 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.926169 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.926196 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.926230 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.926256 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:53Z","lastTransitionTime":"2026-01-30T21:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:53 crc kubenswrapper[4869]: I0130 21:44:53.961215 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 21:38:50.873818894 +0000 UTC Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.030679 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.030792 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.030821 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.030860 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.030929 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:54Z","lastTransitionTime":"2026-01-30T21:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.133701 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.133772 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.133792 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.133820 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.133841 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:54Z","lastTransitionTime":"2026-01-30T21:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.237058 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.237132 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.237151 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.237184 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.237205 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:54Z","lastTransitionTime":"2026-01-30T21:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.340783 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.340825 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.340836 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.340856 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.340866 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:54Z","lastTransitionTime":"2026-01-30T21:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.443598 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.443652 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.443667 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.443688 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.443705 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:54Z","lastTransitionTime":"2026-01-30T21:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.545810 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.545867 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.545877 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.545912 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.545923 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:54Z","lastTransitionTime":"2026-01-30T21:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.648294 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.648377 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.648409 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.648443 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.648466 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:54Z","lastTransitionTime":"2026-01-30T21:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.750976 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.751019 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.751031 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.751046 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.751055 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:54Z","lastTransitionTime":"2026-01-30T21:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.853524 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.853554 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.853596 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.853610 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.853621 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:54Z","lastTransitionTime":"2026-01-30T21:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.955860 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.955920 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.955933 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.955947 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.955959 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:54Z","lastTransitionTime":"2026-01-30T21:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:54 crc kubenswrapper[4869]: I0130 21:44:54.962191 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 23:09:43.779813765 +0000 UTC Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.058438 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.058491 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.058509 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.058531 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.058549 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:55Z","lastTransitionTime":"2026-01-30T21:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.161312 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.161355 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.161373 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.161396 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.161414 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:55Z","lastTransitionTime":"2026-01-30T21:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.263996 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.264037 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.264047 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.264063 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.264073 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:55Z","lastTransitionTime":"2026-01-30T21:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.366213 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.366265 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.366286 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.366307 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.366324 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:55Z","lastTransitionTime":"2026-01-30T21:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.468999 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.469047 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.469060 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.469081 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.469097 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:55Z","lastTransitionTime":"2026-01-30T21:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.571009 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.571042 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.571052 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.571065 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.571074 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:55Z","lastTransitionTime":"2026-01-30T21:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.673718 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.673771 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.673788 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.673811 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.673829 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:55Z","lastTransitionTime":"2026-01-30T21:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.776494 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.776551 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.776567 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.776589 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.776606 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:55Z","lastTransitionTime":"2026-01-30T21:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.876971 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.877042 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.877051 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:44:55 crc kubenswrapper[4869]: E0130 21:44:55.877147 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.877164 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:44:55 crc kubenswrapper[4869]: E0130 21:44:55.877243 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:44:55 crc kubenswrapper[4869]: E0130 21:44:55.877372 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:44:55 crc kubenswrapper[4869]: E0130 21:44:55.877526 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd" Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.878633 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.878662 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.878673 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.878690 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.878701 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:55Z","lastTransitionTime":"2026-01-30T21:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.962859 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 17:12:35.995226805 +0000 UTC Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.981109 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.981156 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.981170 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.981190 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:55 crc kubenswrapper[4869]: I0130 21:44:55.981203 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:55Z","lastTransitionTime":"2026-01-30T21:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:56 crc kubenswrapper[4869]: I0130 21:44:56.082994 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:56 crc kubenswrapper[4869]: I0130 21:44:56.083062 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:56 crc kubenswrapper[4869]: I0130 21:44:56.083078 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:56 crc kubenswrapper[4869]: I0130 21:44:56.083104 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:56 crc kubenswrapper[4869]: I0130 21:44:56.083120 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:56Z","lastTransitionTime":"2026-01-30T21:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:56 crc kubenswrapper[4869]: I0130 21:44:56.185212 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:56 crc kubenswrapper[4869]: I0130 21:44:56.185268 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:56 crc kubenswrapper[4869]: I0130 21:44:56.185283 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:56 crc kubenswrapper[4869]: I0130 21:44:56.185308 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:56 crc kubenswrapper[4869]: I0130 21:44:56.185323 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:56Z","lastTransitionTime":"2026-01-30T21:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:56 crc kubenswrapper[4869]: I0130 21:44:56.288041 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:56 crc kubenswrapper[4869]: I0130 21:44:56.288090 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:56 crc kubenswrapper[4869]: I0130 21:44:56.288104 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:56 crc kubenswrapper[4869]: I0130 21:44:56.288125 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:56 crc kubenswrapper[4869]: I0130 21:44:56.288139 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:56Z","lastTransitionTime":"2026-01-30T21:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:56 crc kubenswrapper[4869]: I0130 21:44:56.390739 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:56 crc kubenswrapper[4869]: I0130 21:44:56.390776 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:56 crc kubenswrapper[4869]: I0130 21:44:56.390789 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:56 crc kubenswrapper[4869]: I0130 21:44:56.390804 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:56 crc kubenswrapper[4869]: I0130 21:44:56.390815 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:56Z","lastTransitionTime":"2026-01-30T21:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:56 crc kubenswrapper[4869]: I0130 21:44:56.493101 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:56 crc kubenswrapper[4869]: I0130 21:44:56.493145 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:56 crc kubenswrapper[4869]: I0130 21:44:56.493157 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:56 crc kubenswrapper[4869]: I0130 21:44:56.493221 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:56 crc kubenswrapper[4869]: I0130 21:44:56.493236 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:56Z","lastTransitionTime":"2026-01-30T21:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:56 crc kubenswrapper[4869]: I0130 21:44:56.595226 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:56 crc kubenswrapper[4869]: I0130 21:44:56.595268 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:56 crc kubenswrapper[4869]: I0130 21:44:56.595278 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:56 crc kubenswrapper[4869]: I0130 21:44:56.595293 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:56 crc kubenswrapper[4869]: I0130 21:44:56.595303 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:56Z","lastTransitionTime":"2026-01-30T21:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:56 crc kubenswrapper[4869]: I0130 21:44:56.697921 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:56 crc kubenswrapper[4869]: I0130 21:44:56.697955 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:56 crc kubenswrapper[4869]: I0130 21:44:56.697963 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:56 crc kubenswrapper[4869]: I0130 21:44:56.697978 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:56 crc kubenswrapper[4869]: I0130 21:44:56.697987 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:56Z","lastTransitionTime":"2026-01-30T21:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:56 crc kubenswrapper[4869]: I0130 21:44:56.800524 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:56 crc kubenswrapper[4869]: I0130 21:44:56.800569 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:56 crc kubenswrapper[4869]: I0130 21:44:56.800582 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:56 crc kubenswrapper[4869]: I0130 21:44:56.800601 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:56 crc kubenswrapper[4869]: I0130 21:44:56.800613 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:56Z","lastTransitionTime":"2026-01-30T21:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:56 crc kubenswrapper[4869]: I0130 21:44:56.903359 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:56 crc kubenswrapper[4869]: I0130 21:44:56.903405 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:56 crc kubenswrapper[4869]: I0130 21:44:56.903417 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:56 crc kubenswrapper[4869]: I0130 21:44:56.903438 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:56 crc kubenswrapper[4869]: I0130 21:44:56.903452 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:56Z","lastTransitionTime":"2026-01-30T21:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:56 crc kubenswrapper[4869]: I0130 21:44:56.963243 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 21:39:05.177049455 +0000 UTC Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.005118 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.005154 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.005163 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.005177 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.005189 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:57Z","lastTransitionTime":"2026-01-30T21:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.107454 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.107491 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.107499 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.107513 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.107523 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:57Z","lastTransitionTime":"2026-01-30T21:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.210134 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.210184 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.210199 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.210217 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.210228 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:57Z","lastTransitionTime":"2026-01-30T21:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.312423 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.312489 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.312507 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.312529 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.312548 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:57Z","lastTransitionTime":"2026-01-30T21:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.414871 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.414975 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.414988 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.415004 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.415014 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:57Z","lastTransitionTime":"2026-01-30T21:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.518478 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.518516 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.518527 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.518548 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.518559 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:57Z","lastTransitionTime":"2026-01-30T21:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.621464 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.621505 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.621516 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.621532 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.621542 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:57Z","lastTransitionTime":"2026-01-30T21:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.723512 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.723541 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.723548 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.723560 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.723570 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:57Z","lastTransitionTime":"2026-01-30T21:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.825581 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.825609 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.825619 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.825635 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.825645 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:57Z","lastTransitionTime":"2026-01-30T21:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.877116 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.877163 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:44:57 crc kubenswrapper[4869]: E0130 21:44:57.877280 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.877368 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:44:57 crc kubenswrapper[4869]: E0130 21:44:57.877498 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.877520 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:44:57 crc kubenswrapper[4869]: E0130 21:44:57.877571 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd" Jan 30 21:44:57 crc kubenswrapper[4869]: E0130 21:44:57.877693 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.927836 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.927881 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.927913 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.927933 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.927946 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:57Z","lastTransitionTime":"2026-01-30T21:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.963725 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 22:29:26.158960768 +0000 UTC Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.970238 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.970268 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.970288 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.970307 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.970318 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:57Z","lastTransitionTime":"2026-01-30T21:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:57 crc kubenswrapper[4869]: E0130 21:44:57.982927 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eed4c80a-2486-4f50-8ae9-4ddbd620d70e\\\",\\\"systemUUID\\\":\\\"073254b5-c7c0-49f1-bed8-4438b0f03db1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:57Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.986461 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.986505 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.986513 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.986525 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:57 crc kubenswrapper[4869]: I0130 21:44:57.986536 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:57Z","lastTransitionTime":"2026-01-30T21:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:58 crc kubenswrapper[4869]: E0130 21:44:58.000121 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eed4c80a-2486-4f50-8ae9-4ddbd620d70e\\\",\\\"systemUUID\\\":\\\"073254b5-c7c0-49f1-bed8-4438b0f03db1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:57Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.003817 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.003870 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.003884 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.003943 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.003965 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:58Z","lastTransitionTime":"2026-01-30T21:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:58 crc kubenswrapper[4869]: E0130 21:44:58.017558 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eed4c80a-2486-4f50-8ae9-4ddbd620d70e\\\",\\\"systemUUID\\\":\\\"073254b5-c7c0-49f1-bed8-4438b0f03db1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:58Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.020704 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.020734 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.020745 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.020791 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.020803 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:58Z","lastTransitionTime":"2026-01-30T21:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:58 crc kubenswrapper[4869]: E0130 21:44:58.031439 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eed4c80a-2486-4f50-8ae9-4ddbd620d70e\\\",\\\"systemUUID\\\":\\\"073254b5-c7c0-49f1-bed8-4438b0f03db1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:58Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.034943 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.034981 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.034993 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.035010 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.035025 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:58Z","lastTransitionTime":"2026-01-30T21:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:58 crc kubenswrapper[4869]: E0130 21:44:58.045871 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"eed4c80a-2486-4f50-8ae9-4ddbd620d70e\\\",\\\"systemUUID\\\":\\\"073254b5-c7c0-49f1-bed8-4438b0f03db1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:58Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:58 crc kubenswrapper[4869]: E0130 21:44:58.046006 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.047105 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.047130 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.047138 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.047151 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.047161 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:58Z","lastTransitionTime":"2026-01-30T21:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.150091 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.150145 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.150162 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.150185 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.150208 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:58Z","lastTransitionTime":"2026-01-30T21:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.252270 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.252308 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.252328 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.252345 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.252360 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:58Z","lastTransitionTime":"2026-01-30T21:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.354690 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.354725 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.354733 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.354748 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.354757 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:58Z","lastTransitionTime":"2026-01-30T21:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.457257 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.457299 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.457311 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.457326 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.457336 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:58Z","lastTransitionTime":"2026-01-30T21:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.559874 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.559931 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.559940 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.559953 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.559963 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:58Z","lastTransitionTime":"2026-01-30T21:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.571504 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b980f4db-64d3-48c9-9ff8-18f23c4888cd-metrics-certs\") pod \"network-metrics-daemon-45w6p\" (UID: \"b980f4db-64d3-48c9-9ff8-18f23c4888cd\") " pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:44:58 crc kubenswrapper[4869]: E0130 21:44:58.571649 4869 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:44:58 crc kubenswrapper[4869]: E0130 21:44:58.571700 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b980f4db-64d3-48c9-9ff8-18f23c4888cd-metrics-certs podName:b980f4db-64d3-48c9-9ff8-18f23c4888cd nodeName:}" failed. No retries permitted until 2026-01-30 21:46:02.571684819 +0000 UTC m=+163.457442844 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b980f4db-64d3-48c9-9ff8-18f23c4888cd-metrics-certs") pod "network-metrics-daemon-45w6p" (UID: "b980f4db-64d3-48c9-9ff8-18f23c4888cd") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.662263 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.662312 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.662323 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.662339 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.662357 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:58Z","lastTransitionTime":"2026-01-30T21:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.765140 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.765181 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.765194 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.765211 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.765225 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:58Z","lastTransitionTime":"2026-01-30T21:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.867858 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.867926 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.867935 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.867950 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.867959 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:58Z","lastTransitionTime":"2026-01-30T21:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.964881 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 07:10:07.244776369 +0000 UTC Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.970577 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.970617 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.970628 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.970643 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:58 crc kubenswrapper[4869]: I0130 21:44:58.970656 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:58Z","lastTransitionTime":"2026-01-30T21:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.072830 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.072867 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.072885 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.072930 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.072944 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:59Z","lastTransitionTime":"2026-01-30T21:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.175197 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.175235 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.175244 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.175257 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.175267 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:59Z","lastTransitionTime":"2026-01-30T21:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.277474 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.277512 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.277524 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.277540 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.277551 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:59Z","lastTransitionTime":"2026-01-30T21:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.380092 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.380134 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.380144 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.380157 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.380167 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:59Z","lastTransitionTime":"2026-01-30T21:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.482296 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.482323 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.482332 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.482349 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.482361 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:59Z","lastTransitionTime":"2026-01-30T21:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.584721 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.584767 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.584777 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.584791 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.584801 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:59Z","lastTransitionTime":"2026-01-30T21:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.687694 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.687770 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.687791 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.687820 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.687838 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:59Z","lastTransitionTime":"2026-01-30T21:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.790979 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.791019 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.791029 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.791045 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.791055 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:59Z","lastTransitionTime":"2026-01-30T21:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.876507 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.876579 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:44:59 crc kubenswrapper[4869]: E0130 21:44:59.876679 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:44:59 crc kubenswrapper[4869]: E0130 21:44:59.876793 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd" Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.876512 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:44:59 crc kubenswrapper[4869]: E0130 21:44:59.877182 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.877550 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:44:59 crc kubenswrapper[4869]: E0130 21:44:59.877686 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.889644 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a48f728-068e-4f6e-9794-12d375245df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb0a197870fc1ba15cbe9437452dfeac28e7670fe2383e71f5e29276082f7236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f22a7faa65595d272f6cb952ac0054da3d3b6a0017dc004f52c65d823f1da06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d705fd79e95205c59ebe341512286e7be25a663cdfe41ec50f0f57582c9f5ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"
,\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d12f1427fc164e0f92a0ce90e116d3239ad1c56376666cea3ef92178d0988076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 21:43:39.192174 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 21:43:39.192295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:43:39.192865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2603741252/tls.crt::/tmp/serving-cert-2603741252/tls.key\\\\\\\"\\\\nI0130 21:43:39.581448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:43:39.592260 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:43:39.592286 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:43:39.592313 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:43:39.592318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:43:39.598780 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:43:39.598801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:43:39.598815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:43:39.598819 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:43:39.598822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:43:39.598829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:43:39.601912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9563e1c219434c81376559138e931e8457d653060f639891e4a15b3a61bcf2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.893720 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.893790 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.893811 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.893838 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.893859 4869 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:59Z","lastTransitionTime":"2026-01-30T21:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.903284 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0cf51fe-c6b0-432d-934e-df3b70d5895c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7301698629901946ea3fa72f6c03cac1a88d253d9e14f4a8c225c8ff390fc0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e817aca3610a40823460973541a6bb549f4d88d55919c93640ae8c6dc0874946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e817aca3610a40823460973541a6bb549f4d88d55919c93640ae8c6dc0874946\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.917175 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e768b90cb0dae0e7c594e45deebf9178321fecc14339fa1e49769e13c950ff9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.934193 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6fc0664-5e80-440d-a6e8-4189cdf5c500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a73c5cac1fdcb8d81bccdb2eea7b34e75752439d7871493f4afdd0325f02b2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fa528823e4fa19f4de380fa7c5ab78f40c763a6ea6624ba20e1a65a81980d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vzgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.953184 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-tz8jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac3c503-e284-4df8-ae5e-0084a884e456\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09e707217ddad89be77f68915c4948c1bbc2e44066f16cce7e255a5a91c1e101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6d47a8f0a09eac03369c1f21e44d8ec6aaf85d4d7ad180579ecd30a6311061\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:44:28Z\\\",\\\"message\\\":\\\"2026-01-30T21:43:42+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d7736fda-c03e-486c-aa38-7f904c4e5cbe\\\\n2026-01-30T21:43:42+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d7736fda-c03e-486c-aa38-7f904c4e5cbe to /host/opt/cni/bin/\\\\n2026-01-30T21:43:43Z [verbose] multus-daemon started\\\\n2026-01-30T21:43:43Z [verbose] Readiness Indicator file check\\\\n2026-01-30T21:44:28Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgb6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tz8jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.965087 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 11:02:13.839111329 +0000 UTC Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.975932 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c39d4fe5-06cd-4ea4-8336-bd481332c475\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92b01be743457d0429d6a2015dd3ee61689a3806691b49ca36b1cdf62717cb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81c545886305c472b5c9ce68265f9d0fc2177184916b45135691b0413eb72c0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:44:18Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 21:44:18.748549 6628 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0130 21:44:18.748601 6628 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0130 21:44:18.748636 6628 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0130 21:44:18.748776 6628 factory.go:1336] Added *v1.Node event handler 7\\\\nI0130 21:44:18.748890 6628 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0130 21:44:18.749458 6628 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0130 21:44:18.749579 6628 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0130 21:44:18.749620 6628 ovnkube.go:599] Stopped ovnkube\\\\nI0130 21:44:18.749653 6628 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 21:44:18.749736 6628 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:44:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b01be743457d0429d6a2015dd3ee61689a3806691b49ca36b1cdf62717cb21\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:44:45Z\\\",\\\"message\\\":\\\"network_controller.go:776] Recording success event on pod 
openshift-machine-config-operator/kube-rbac-proxy-crio-crc\\\\nI0130 21:44:45.263101 7017 services_controller.go:356] Processing sync for service openshift-machine-config-operator/machine-config-operator for network=default\\\\nI0130 21:44:45.263181 7017 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nI0130 21:44:45.263209 7017 services_controller.go:360] Finished syncing service olm-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 2.91078ms\\\\nI0130 21:44:45.263249 7017 services_controller.go:356] Processing sync for service openshift-multus/multus-admission-controller for network=default\\\\nF0130 21:44:45.263178 7017 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:44:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841
e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-stqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.991716 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qnlzn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e92fcbf8-b6b4-4531-a7cf-ed59225dd821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43fd4613a6d7e2360584cfe98ccaa73d8ed65139ba3c249aeb716b4303d170eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c73341f991fc768d5dcc57229c33204b0ade5469b5b9d7a970f38b97c3d6696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qnlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:44:59Z is after 2025-08-24T17:21:41Z" Jan 30 
21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.996528 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.996586 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.996608 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.996636 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:44:59 crc kubenswrapper[4869]: I0130 21:44:59.996659 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:44:59Z","lastTransitionTime":"2026-01-30T21:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.006369 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13da86d5-c968-4e20-88dd-86b52c6402a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5f07c14a94021761f655ab66d0782a878dacdacbe9abc2c02db132e5f5721a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0c8909f295c8aea1aa2d6da1a709d1692b8fbdd6bb389d339d644655788328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://267daf399d26dd0253eed95c94b44bd4299990a14bb28159ddfd1f324dec6192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fd36ca3590a916ec08c8ae4d3b410eb11261c6a5295a4274119f93e08079cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.021075 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.039478 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b14ccc4339f1670442e8d9c16039fb3607ed2097daf0e58291333a9d566dc02e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.057018 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a98ef3151784bd76a916ca4dc6d85c25c10bf0d9eca4444780ceb2c826e439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7741a57e2e55c2f88115cf30643e5c4cd87ab58081677cccdd027bf1307a7a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T21:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.072249 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f947b452-4e15-43ce-a4f4-3ca13d10f8d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://754edbe89075ed676689e09cc4aceaa7e45560a1db966e12c22a96be3bd3eba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68975f2095b7dcd1eaf59dd358c4ac1b731116418a8991290d4ee2d66611f2d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://568c346025fcd6f3b511f60206c6836f445c3e1be15135c064fed0507d20aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07626e9fc038e2c00d6751a0660aedfe5e66e6f735999b30cbffb7d6f8ff7011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07626e9fc038e2c00d6751a0660aedfe5e66e6f735999b30cbffb7d6f8ff7011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.093202 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.099843 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.099931 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.099951 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.099976 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.099995 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:45:00Z","lastTransitionTime":"2026-01-30T21:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.105910 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v9n4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ca98e-63b4-4337-af06-7525b62333b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c53920051c0c7658e9ffebac1382950ffea94ead520f2ca1b04cb6a0f9e15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk245\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v9n4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.126449 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.145270 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd656f9-7188-4998-8195-c2ec92442b7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeed7a751cd65072ffc31d4db0396578155d50518ad9dae28ced02985d1e4c62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jdbl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.156197 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c24fb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a9e9fe8-01db-40b0-bdd8-e3d626df037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c97e257984df620e4f118db44cb63e7748a6d62887c3904f1a2a0641991602d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p2pd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c24fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.171423 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-45w6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b980f4db-64d3-48c9-9ff8-18f23c4888cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf95q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf95q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-45w6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.203510 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.203559 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.203571 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.203594 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.203611 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:45:00Z","lastTransitionTime":"2026-01-30T21:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.307063 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.307157 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.307182 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.307219 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.307243 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:45:00Z","lastTransitionTime":"2026-01-30T21:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.410049 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.410146 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.410205 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.410287 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.410336 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:45:00Z","lastTransitionTime":"2026-01-30T21:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.513628 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.513682 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.513693 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.513710 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.513722 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:45:00Z","lastTransitionTime":"2026-01-30T21:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.617231 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.617289 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.617302 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.617323 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.617337 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:45:00Z","lastTransitionTime":"2026-01-30T21:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.721363 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.721414 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.721425 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.721443 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.721453 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:45:00Z","lastTransitionTime":"2026-01-30T21:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.825043 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.825098 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.825109 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.825121 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.825130 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:45:00Z","lastTransitionTime":"2026-01-30T21:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.877219 4869 scope.go:117] "RemoveContainer" containerID="92b01be743457d0429d6a2015dd3ee61689a3806691b49ca36b1cdf62717cb21" Jan 30 21:45:00 crc kubenswrapper[4869]: E0130 21:45:00.877386 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-stqvf_openshift-ovn-kubernetes(c39d4fe5-06cd-4ea4-8336-bd481332c475)\"" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.897652 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f947b452-4e15-43ce-a4f4-3ca13d10f8d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://754edbe89075ed676689e09cc4aceaa7e45560a1db966e12c22a96be3bd3eba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68975f2095b7dcd1eaf59dd358c4ac1b731116418a8991290d4ee2d66611f2d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://568c346025fcd6f3b511f60206c6836f445c3e1be15135c064fed0507d20aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b
881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07626e9fc038e2c00d6751a0660aedfe5e66e6f735999b30cbffb7d6f8ff7011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07626e9fc038e2c00d6751a0660aedfe5e66e6f735999b30cbffb7d6f8ff7011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.919068 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not 
be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.928011 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.928051 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.928062 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.928080 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.928092 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:45:00Z","lastTransitionTime":"2026-01-30T21:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.934473 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v9n4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ca98e-63b4-4337-af06-7525b62333b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c53920051c0c7658e9ffebac1382950ffea94ead520f2ca1b04cb6a0f9e15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rk245\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v9n4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.957155 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.966656 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 08:52:30.582709943 +0000 UTC Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.978074 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd656f9-7188-4998-8195-c2ec92442b7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeed7a751cd65072ffc31d4db0396578155d50518ad9dae28ced02985d1e4c62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e0df602a47c08ba881b31b090feb4ab69c0a78d75131eaa0db3ead6bd785538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec3f7e8289b0bde4a10d332aeb90bc0e754559ed8e0d2f758687cd14b0c72a9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6a93893766b374bcd6cd728c8d744e4009ee15dd5083958fb5c8bad8d84a68e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db77134513276a1ba7d9c5bea0457cdb817165ccb556c8d93a1bccf6196882f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17debd4740c520dca6d67ffb372e2e3ec8992edb13e8084c9563d52a23388ae1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fa376c1cfdc2302d4eedfcbdea36872d7ed5e4246a1e763125899d8e9f7e217\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ng79r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jdbl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:45:00 crc kubenswrapper[4869]: I0130 21:45:00.992159 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c24fb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a9e9fe8-01db-40b0-bdd8-e3d626df037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c97e257984df620e4f118db44cb63e7748a6d62887c3904f1a2a0641991602d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p2pd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c24fb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:45:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.009932 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-45w6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b980f4db-64d3-48c9-9ff8-18f23c4888cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf95q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf95q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-45w6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:45:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.031505 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.031568 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.031583 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.031602 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.031614 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:45:01Z","lastTransitionTime":"2026-01-30T21:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.044166 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c39d4fe5-06cd-4ea4-8336-bd481332c475\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92b01be743457d0429d6a2015dd3ee61689a3806691b49ca36b1cdf62717cb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b01be743457d0429d6a2015dd3ee61689a3806691b49ca36b1cdf62717cb21\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:44:45Z\\\",\\\"message\\\":\\\"network_controller.go:776] Recording success event on pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc\\\\nI0130 21:44:45.263101 7017 services_controller.go:356] Processing sync for service openshift-machine-config-operator/machine-config-operator for network=default\\\\nI0130 21:44:45.263181 7017 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nI0130 21:44:45.263209 7017 services_controller.go:360] Finished syncing service olm-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 2.91078ms\\\\nI0130 21:44:45.263249 7017 services_controller.go:356] Processing sync for service openshift-multus/multus-admission-controller for network=default\\\\nF0130 21:44:45.263178 7017 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:44:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-stqvf_openshift-ovn-kubernetes(c39d4fe5-06cd-4ea4-8336-bd481332c475)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-stqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:45:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.060085 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qnlzn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e92fcbf8-b6b4-4531-a7cf-ed59225dd821\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43fd4613a6d7e2360584cfe98ccaa73d8ed65139ba3c249aeb716b4303d170eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tsq2
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c73341f991fc768d5dcc57229c33204b0ade5469b5b9d7a970f38b97c3d6696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tsq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qnlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:45:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.088021 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a48f728-068e-4f6e-9794-12d375245df1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb0a197870fc1ba15cbe9437452dfeac28e7670fe2383e71f5e29276082f7236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f22a7faa65595d272f6cb952ac0054da3d3b6a0017dc004f52c65d823f1da06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d705fd79e95205c59ebe341512286e7be25a663cdfe41ec50f0f57582c9f5ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d12f1427fc164e0f92a0ce90e116d3239ad1c56376666cea3ef92178d0988076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0130 21:43:39.192174 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 21:43:39.192295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:43:39.192865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2603741252/tls.crt::/tmp/serving-cert-2603741252/tls.key\\\\\\\"\\\\nI0130 21:43:39.581448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:43:39.592260 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:43:39.592286 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:43:39.592313 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:43:39.592318 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:43:39.598780 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:43:39.598801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:43:39.598811 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:43:39.598815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:43:39.598819 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:43:39.598822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:43:39.598829 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:43:39.601912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9563e1c219434c81376559138e931e8457d653060f639891e4a15b3a61bcf2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:45:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.105169 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0cf51fe-c6b0-432d-934e-df3b70d5895c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7301698629901946ea3fa72f6c03cac1a88d253d9e14f4a8c225c8ff390fc0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e817aca3610a40823460973541a6bb549f4d88d55919c93640ae8c6dc0874946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e817aca3610a40823460973541a6bb549f4d88d55919c93640ae8c6dc0874946\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:43:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:45:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.125267 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e768b90cb0dae0e7c594e45deebf9178321fecc14339fa1e49769e13c950ff9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:45:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.134288 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.134339 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.134350 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.134367 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.134378 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:45:01Z","lastTransitionTime":"2026-01-30T21:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.140253 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6fc0664-5e80-440d-a6e8-4189cdf5c500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a73c5cac1fdcb8d81bccdb2eea7b34e75752439d7871493f4afdd0325f02b2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fa528823e4fa19f4de380fa7c5ab78f40c763a6ea6624ba20e1a65a81980d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zqhgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vzgdv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:45:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.157238 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tz8jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac3c503-e284-4df8-ae5e-0084a884e456\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09e707217ddad89be77f68915c4948c1bbc2e44066f16cce7e255a5a91c1e101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6d47a8f0a09eac03369c1f21e44d8ec6aaf85d4d7ad180579ecd30a6311061\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:44:28Z\\\",\\\"message\\\":\\\"2026-01-30T21:43:42+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d7736fda-c03e-486c-aa38-7f904c4e5cbe\\\\n2026-01-30T21:43:42+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d7736fda-c03e-486c-aa38-7f904c4e5cbe to /host/opt/cni/bin/\\\\n2026-01-30T21:43:43Z [verbose] multus-daemon started\\\\n2026-01-30T21:43:43Z [verbose] Readiness Indicator file check\\\\n2026-01-30T21:44:28Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:43:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgb6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tz8jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:45:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.174325 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13da86d5-c968-4e20-88dd-86b52c6402a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5f07c14a94021761f655ab66d0782a878dacdacbe9abc2c02db132e5f5721a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0c8909f295c8aea1aa2d6da1a709d1692b8fbdd6bb389d339d644655788328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://267daf399d26dd0253eed95c94b44bd4299990a14bb28159ddfd1f324dec6192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79fd36ca3590a916ec08c8ae4d3b410eb11261c6a5295a4274119f93e08079cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:43:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:45:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.196034 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:45:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.211427 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b14ccc4339f1670442e8d9c16039fb3607ed2097daf0e58291333a9d566dc02e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:45:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.226703 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:43:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a98ef3151784bd76a916ca4dc6d85c25c10bf0d9eca4444780ceb2c826e439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7741a57e2e55c2f88115cf30643e5c4cd87ab58081677cccdd027bf1307a7a83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:45:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.237780 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.237879 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.237945 4869 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.237974 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.238019 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:45:01Z","lastTransitionTime":"2026-01-30T21:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.341566 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.341638 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.341657 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.341691 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.341712 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:45:01Z","lastTransitionTime":"2026-01-30T21:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.445402 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.445450 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.445463 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.445479 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.445491 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:45:01Z","lastTransitionTime":"2026-01-30T21:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.549582 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.549657 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.549685 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.549720 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.549747 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:45:01Z","lastTransitionTime":"2026-01-30T21:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.652059 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.652113 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.652122 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.652139 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.652148 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:45:01Z","lastTransitionTime":"2026-01-30T21:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.754840 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.754874 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.754882 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.754920 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.754936 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:45:01Z","lastTransitionTime":"2026-01-30T21:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.857494 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.857543 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.857554 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.857573 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.857581 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:45:01Z","lastTransitionTime":"2026-01-30T21:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.876081 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.876109 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:45:01 crc kubenswrapper[4869]: E0130 21:45:01.876209 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.876240 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.876317 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:45:01 crc kubenswrapper[4869]: E0130 21:45:01.876401 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:45:01 crc kubenswrapper[4869]: E0130 21:45:01.876534 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd" Jan 30 21:45:01 crc kubenswrapper[4869]: E0130 21:45:01.876632 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.960295 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.960352 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.960367 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.960390 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.960402 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:45:01Z","lastTransitionTime":"2026-01-30T21:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:45:01 crc kubenswrapper[4869]: I0130 21:45:01.967493 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 00:18:56.910216826 +0000 UTC Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.062857 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.062915 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.062926 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.062941 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.062951 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:45:02Z","lastTransitionTime":"2026-01-30T21:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.166263 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.166354 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.166377 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.166411 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.166434 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:45:02Z","lastTransitionTime":"2026-01-30T21:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.270177 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.270248 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.270264 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.270280 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.270296 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:45:02Z","lastTransitionTime":"2026-01-30T21:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.373024 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.373069 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.373081 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.373099 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.373109 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:45:02Z","lastTransitionTime":"2026-01-30T21:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.477007 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.477063 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.477077 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.477102 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.477120 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:45:02Z","lastTransitionTime":"2026-01-30T21:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.580098 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.580175 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.580207 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.580223 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.580235 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:45:02Z","lastTransitionTime":"2026-01-30T21:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.684623 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.684681 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.684699 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.684726 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.684744 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:45:02Z","lastTransitionTime":"2026-01-30T21:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.788325 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.788387 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.788400 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.788419 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.788435 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:45:02Z","lastTransitionTime":"2026-01-30T21:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.890376 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.890619 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.890629 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.890643 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.890652 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:45:02Z","lastTransitionTime":"2026-01-30T21:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.967983 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 19:41:31.247023754 +0000 UTC Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.994492 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.994603 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.994624 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.994684 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:45:02 crc kubenswrapper[4869]: I0130 21:45:02.994705 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:45:02Z","lastTransitionTime":"2026-01-30T21:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:45:03 crc kubenswrapper[4869]: I0130 21:45:03.097089 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:45:03 crc kubenswrapper[4869]: I0130 21:45:03.097149 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:45:03 crc kubenswrapper[4869]: I0130 21:45:03.097159 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:45:03 crc kubenswrapper[4869]: I0130 21:45:03.097173 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:45:03 crc kubenswrapper[4869]: I0130 21:45:03.097183 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:45:03Z","lastTransitionTime":"2026-01-30T21:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
-- identical node-status cycle repeated at 21:45:03.199, 21:45:03.301, 21:45:03.404, 21:45:03.506, 21:45:03.608, and 21:45:03.711 --
-- identical node-status cycle repeated at 21:45:03.813 --
Jan 30 21:45:03 crc kubenswrapper[4869]: I0130 21:45:03.876497 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:45:03 crc kubenswrapper[4869]: I0130 21:45:03.876520 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:45:03 crc kubenswrapper[4869]: I0130 21:45:03.876571 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p"
Jan 30 21:45:03 crc kubenswrapper[4869]: E0130 21:45:03.876605 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:45:03 crc kubenswrapper[4869]: I0130 21:45:03.876601 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:45:03 crc kubenswrapper[4869]: E0130 21:45:03.876658 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd"
Jan 30 21:45:03 crc kubenswrapper[4869]: E0130 21:45:03.876720 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:45:03 crc kubenswrapper[4869]: E0130 21:45:03.876790 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
-- identical node-status cycle repeated at 21:45:03.915 --
Jan 30 21:45:03 crc kubenswrapper[4869]: I0130 21:45:03.968938 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 04:18:10.834103501 +0000 UTC
-- identical node-status cycle repeated at 21:45:04.018 --
-- identical node-status cycle repeated at 21:45:04.120, 21:45:04.222, 21:45:04.325, 21:45:04.427, 21:45:04.529, 21:45:04.632, 21:45:04.734, 21:45:04.836, and 21:45:04.938 --
Jan 30 21:45:04 crc kubenswrapper[4869]: I0130 21:45:04.969595 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 10:28:58.599203561 +0000 UTC
-- identical node-status cycle repeated at 21:45:05.041 --
Jan 30 21:45:05 crc kubenswrapper[4869]: I0130 21:45:05.086831 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf"
Jan 30 21:45:05 crc kubenswrapper[4869]: I0130 21:45:05.087669 4869 scope.go:117] "RemoveContainer" containerID="92b01be743457d0429d6a2015dd3ee61689a3806691b49ca36b1cdf62717cb21"
Jan 30 21:45:05 crc kubenswrapper[4869]: E0130 21:45:05.087806 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-stqvf_openshift-ovn-kubernetes(c39d4fe5-06cd-4ea4-8336-bd481332c475)\"" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475"
-- identical node-status cycle repeated at 21:45:05.143 --
-- identical node-status cycle repeated at 21:45:05.246, 21:45:05.349, 21:45:05.453, 21:45:05.555, 21:45:05.658, and 21:45:05.760 --
-- identical node-status cycle repeated at 21:45:05.863 --
Jan 30 21:45:05 crc kubenswrapper[4869]: I0130 21:45:05.876082 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p"
Jan 30 21:45:05 crc kubenswrapper[4869]: I0130 21:45:05.876321 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:45:05 crc kubenswrapper[4869]: E0130 21:45:05.876507 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd"
Jan 30 21:45:05 crc kubenswrapper[4869]: I0130 21:45:05.876533 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:45:05 crc kubenswrapper[4869]: E0130 21:45:05.876786 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:45:05 crc kubenswrapper[4869]: E0130 21:45:05.876856 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:45:05 crc kubenswrapper[4869]: I0130 21:45:05.877299 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:45:05 crc kubenswrapper[4869]: E0130 21:45:05.877518 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:45:05 crc kubenswrapper[4869]: I0130 21:45:05.891220 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"]
-- identical node-status cycle repeated at 21:45:05.967 --
Jan 30 21:45:05 crc kubenswrapper[4869]: I0130 21:45:05.970642 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 23:56:04.313088337 +0000 UTC
-- identical node-status cycle repeated at 21:45:06.071 --
-- identical node-status cycle repeated at 21:45:06.174, 21:45:06.278, 21:45:06.381, 21:45:06.485, 21:45:06.587, 21:45:06.690, 21:45:06.792, and 21:45:06.894 --
Jan 30 21:45:06 crc kubenswrapper[4869]: I0130 21:45:06.970783 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 06:52:24.535416922 +0000 UTC
-- identical node-status cycle repeated at 21:45:06.996 and 21:45:07.098 --
-- identical node-status cycle repeated at 21:45:07.201, 21:45:07.304, 21:45:07.409, 21:45:07.513, 21:45:07.616, and 21:45:07.720 --
Has your network provider started?"} Jan 30 21:45:07 crc kubenswrapper[4869]: I0130 21:45:07.823132 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:45:07 crc kubenswrapper[4869]: I0130 21:45:07.823176 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:45:07 crc kubenswrapper[4869]: I0130 21:45:07.823186 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:45:07 crc kubenswrapper[4869]: I0130 21:45:07.823200 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:45:07 crc kubenswrapper[4869]: I0130 21:45:07.823211 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:45:07Z","lastTransitionTime":"2026-01-30T21:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:45:07 crc kubenswrapper[4869]: I0130 21:45:07.875774 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:45:07 crc kubenswrapper[4869]: I0130 21:45:07.875811 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:45:07 crc kubenswrapper[4869]: E0130 21:45:07.875962 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:45:07 crc kubenswrapper[4869]: I0130 21:45:07.875999 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:45:07 crc kubenswrapper[4869]: I0130 21:45:07.876034 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:45:07 crc kubenswrapper[4869]: E0130 21:45:07.876151 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd" Jan 30 21:45:07 crc kubenswrapper[4869]: E0130 21:45:07.876409 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:45:07 crc kubenswrapper[4869]: E0130 21:45:07.876413 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:45:07 crc kubenswrapper[4869]: I0130 21:45:07.924887 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:45:07 crc kubenswrapper[4869]: I0130 21:45:07.925023 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:45:07 crc kubenswrapper[4869]: I0130 21:45:07.925044 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:45:07 crc kubenswrapper[4869]: I0130 21:45:07.925064 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:45:07 crc kubenswrapper[4869]: I0130 21:45:07.925080 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:45:07Z","lastTransitionTime":"2026-01-30T21:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:45:07 crc kubenswrapper[4869]: I0130 21:45:07.971168 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 06:57:19.791660738 +0000 UTC Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.027602 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.027667 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.027692 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.027718 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.027736 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:45:08Z","lastTransitionTime":"2026-01-30T21:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.130703 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.130755 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.130774 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.130800 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.130817 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:45:08Z","lastTransitionTime":"2026-01-30T21:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.233332 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.233376 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.233385 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.233400 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.233410 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:45:08Z","lastTransitionTime":"2026-01-30T21:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.336617 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.337144 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.337235 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.337404 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.337501 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:45:08Z","lastTransitionTime":"2026-01-30T21:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.408342 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.408432 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.408472 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.408507 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.408532 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:45:08Z","lastTransitionTime":"2026-01-30T21:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.483374 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-cxvtp"]
Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.484211 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cxvtp"
Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.488423 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.488753 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.488796 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.489271 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.528997 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=89.528962331 podStartE2EDuration="1m29.528962331s" podCreationTimestamp="2026-01-30 21:43:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:08.515032011 +0000 UTC m=+109.400790036" watchObservedRunningTime="2026-01-30 21:45:08.528962331 +0000 UTC m=+109.414720396"
Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.580084 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b4cd3d77-6b24-4ebf-bce5-f1297dc178ec-service-ca\") pod \"cluster-version-operator-5c965bbfc6-cxvtp\" (UID: \"b4cd3d77-6b24-4ebf-bce5-f1297dc178ec\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cxvtp"
Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.580132 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b4cd3d77-6b24-4ebf-bce5-f1297dc178ec-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-cxvtp\" (UID: \"b4cd3d77-6b24-4ebf-bce5-f1297dc178ec\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cxvtp"
Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.580150 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b4cd3d77-6b24-4ebf-bce5-f1297dc178ec-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-cxvtp\" (UID: \"b4cd3d77-6b24-4ebf-bce5-f1297dc178ec\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cxvtp"
Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.580224 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4cd3d77-6b24-4ebf-bce5-f1297dc178ec-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-cxvtp\" (UID: \"b4cd3d77-6b24-4ebf-bce5-f1297dc178ec\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cxvtp"
Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.580252 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b4cd3d77-6b24-4ebf-bce5-f1297dc178ec-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-cxvtp\" (UID: \"b4cd3d77-6b24-4ebf-bce5-f1297dc178ec\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cxvtp"
Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.598308 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=57.598281253 podStartE2EDuration="57.598281253s" podCreationTimestamp="2026-01-30 21:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:08.597817549 +0000 UTC m=+109.483575584" watchObservedRunningTime="2026-01-30 21:45:08.598281253 +0000 UTC m=+109.484039288"
Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.644848 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-v9n4p" podStartSLOduration=88.64480949 podStartE2EDuration="1m28.64480949s" podCreationTimestamp="2026-01-30 21:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:08.631381936 +0000 UTC m=+109.517139961" watchObservedRunningTime="2026-01-30 21:45:08.64480949 +0000 UTC m=+109.530567515"
Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.667228 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-jdbl9" podStartSLOduration=88.667203402 podStartE2EDuration="1m28.667203402s" podCreationTimestamp="2026-01-30 21:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:08.667103549 +0000 UTC m=+109.552861584" watchObservedRunningTime="2026-01-30 21:45:08.667203402 +0000 UTC m=+109.552961427"
Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.680615 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-c24fb" podStartSLOduration=88.680585255 podStartE2EDuration="1m28.680585255s" podCreationTimestamp="2026-01-30 21:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:08.680541593 +0000 UTC m=+109.566299628" watchObservedRunningTime="2026-01-30 21:45:08.680585255 +0000 UTC m=+109.566343280"
Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.680755 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4cd3d77-6b24-4ebf-bce5-f1297dc178ec-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-cxvtp\" (UID: \"b4cd3d77-6b24-4ebf-bce5-f1297dc178ec\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cxvtp"
Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.681293 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b4cd3d77-6b24-4ebf-bce5-f1297dc178ec-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-cxvtp\" (UID: \"b4cd3d77-6b24-4ebf-bce5-f1297dc178ec\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cxvtp"
Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.681356 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b4cd3d77-6b24-4ebf-bce5-f1297dc178ec-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-cxvtp\" (UID: \"b4cd3d77-6b24-4ebf-bce5-f1297dc178ec\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cxvtp"
Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.681379 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b4cd3d77-6b24-4ebf-bce5-f1297dc178ec-service-ca\") pod \"cluster-version-operator-5c965bbfc6-cxvtp\" (UID: \"b4cd3d77-6b24-4ebf-bce5-f1297dc178ec\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cxvtp"
Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.681410 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b4cd3d77-6b24-4ebf-bce5-f1297dc178ec-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-cxvtp\" (UID: \"b4cd3d77-6b24-4ebf-bce5-f1297dc178ec\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cxvtp"
Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.681518 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b4cd3d77-6b24-4ebf-bce5-f1297dc178ec-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-cxvtp\" (UID: \"b4cd3d77-6b24-4ebf-bce5-f1297dc178ec\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cxvtp"
Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.681558 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b4cd3d77-6b24-4ebf-bce5-f1297dc178ec-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-cxvtp\" (UID: \"b4cd3d77-6b24-4ebf-bce5-f1297dc178ec\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cxvtp"
Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.682246 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b4cd3d77-6b24-4ebf-bce5-f1297dc178ec-service-ca\") pod \"cluster-version-operator-5c965bbfc6-cxvtp\" (UID: \"b4cd3d77-6b24-4ebf-bce5-f1297dc178ec\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cxvtp"
Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.686925 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4cd3d77-6b24-4ebf-bce5-f1297dc178ec-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-cxvtp\" (UID: \"b4cd3d77-6b24-4ebf-bce5-f1297dc178ec\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cxvtp"
Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.701726 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b4cd3d77-6b24-4ebf-bce5-f1297dc178ec-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-cxvtp\" (UID: \"b4cd3d77-6b24-4ebf-bce5-f1297dc178ec\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cxvtp"
Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.711952 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qnlzn" podStartSLOduration=88.711868101 podStartE2EDuration="1m28.711868101s" podCreationTimestamp="2026-01-30 21:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:08.71150983 +0000 UTC m=+109.597267855" watchObservedRunningTime="2026-01-30 21:45:08.711868101 +0000 UTC m=+109.597626156"
Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.733830 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=89.733798889 podStartE2EDuration="1m29.733798889s" podCreationTimestamp="2026-01-30 21:43:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:08.733485889 +0000 UTC m=+109.619243944" watchObservedRunningTime="2026-01-30 21:45:08.733798889 +0000 UTC m=+109.619556914"
Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.746116 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=35.746094799 podStartE2EDuration="35.746094799s" podCreationTimestamp="2026-01-30 21:44:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:08.744423936 +0000 UTC m=+109.630181961" watchObservedRunningTime="2026-01-30 21:45:08.746094799 +0000 UTC m=+109.631852824"
Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.800592 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=3.80055006 podStartE2EDuration="3.80055006s" podCreationTimestamp="2026-01-30 21:45:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:08.773103313 +0000 UTC m=+109.658861408" watchObservedRunningTime="2026-01-30 21:45:08.80055006 +0000 UTC m=+109.686308125"
Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.803951 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cxvtp"
Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.865729 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-tz8jn" podStartSLOduration=88.865704713 podStartE2EDuration="1m28.865704713s" podCreationTimestamp="2026-01-30 21:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:08.865209007 +0000 UTC m=+109.750967042" watchObservedRunningTime="2026-01-30 21:45:08.865704713 +0000 UTC m=+109.751462748"
Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.866065 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podStartSLOduration=88.866061184 podStartE2EDuration="1m28.866061184s" podCreationTimestamp="2026-01-30 21:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:08.835209091 +0000 UTC m=+109.720967116" watchObservedRunningTime="2026-01-30 21:45:08.866061184 +0000 UTC m=+109.751819209"
Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.971481 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 01:15:45.637173867 +0000 UTC
Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.971587 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Jan 30 21:45:08 crc kubenswrapper[4869]: I0130 21:45:08.982462 4869 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 30 21:45:09 crc kubenswrapper[4869]: I0130 21:45:09.466885 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cxvtp" event={"ID":"b4cd3d77-6b24-4ebf-bce5-f1297dc178ec","Type":"ContainerStarted","Data":"029e6bdff24bd535984620e07e88d6a38ea81079ace9e0d11fa943316bb5d43e"}
Jan 30 21:45:09 crc kubenswrapper[4869]: I0130 21:45:09.466985 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cxvtp" event={"ID":"b4cd3d77-6b24-4ebf-bce5-f1297dc178ec","Type":"ContainerStarted","Data":"e7677605f25084c802fa212a0b9efed6ef2ad345be485c2d0f22c26f5dcba240"}
Jan 30 21:45:09 crc kubenswrapper[4869]: I0130 21:45:09.494394 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cxvtp" podStartSLOduration=89.494371061 podStartE2EDuration="1m29.494371061s" podCreationTimestamp="2026-01-30 21:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:09.493208275 +0000 UTC m=+110.378966310" watchObservedRunningTime="2026-01-30 21:45:09.494371061 +0000 UTC m=+110.380129086"
Jan 30 21:45:09 crc kubenswrapper[4869]: I0130 21:45:09.876354 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:45:09 crc kubenswrapper[4869]: I0130 21:45:09.876402 4869 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:45:09 crc kubenswrapper[4869]: I0130 21:45:09.876464 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:45:09 crc kubenswrapper[4869]: I0130 21:45:09.876498 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:45:09 crc kubenswrapper[4869]: E0130 21:45:09.878530 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:45:09 crc kubenswrapper[4869]: E0130 21:45:09.878702 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:45:09 crc kubenswrapper[4869]: E0130 21:45:09.878882 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd" Jan 30 21:45:09 crc kubenswrapper[4869]: E0130 21:45:09.878974 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:45:11 crc kubenswrapper[4869]: I0130 21:45:11.876916 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:45:11 crc kubenswrapper[4869]: I0130 21:45:11.877015 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:45:11 crc kubenswrapper[4869]: I0130 21:45:11.877066 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:45:11 crc kubenswrapper[4869]: E0130 21:45:11.877101 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:45:11 crc kubenswrapper[4869]: E0130 21:45:11.877208 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd" Jan 30 21:45:11 crc kubenswrapper[4869]: E0130 21:45:11.877282 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:45:11 crc kubenswrapper[4869]: I0130 21:45:11.877489 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:45:11 crc kubenswrapper[4869]: E0130 21:45:11.877557 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:45:13 crc kubenswrapper[4869]: I0130 21:45:13.876291 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:45:13 crc kubenswrapper[4869]: E0130 21:45:13.876410 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:45:13 crc kubenswrapper[4869]: I0130 21:45:13.876417 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:45:13 crc kubenswrapper[4869]: I0130 21:45:13.876438 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:45:13 crc kubenswrapper[4869]: I0130 21:45:13.876460 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:45:13 crc kubenswrapper[4869]: E0130 21:45:13.876522 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd" Jan 30 21:45:13 crc kubenswrapper[4869]: E0130 21:45:13.876605 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:45:13 crc kubenswrapper[4869]: E0130 21:45:13.876667 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:45:15 crc kubenswrapper[4869]: I0130 21:45:15.488048 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tz8jn_dac3c503-e284-4df8-ae5e-0084a884e456/kube-multus/1.log" Jan 30 21:45:15 crc kubenswrapper[4869]: I0130 21:45:15.488598 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tz8jn_dac3c503-e284-4df8-ae5e-0084a884e456/kube-multus/0.log" Jan 30 21:45:15 crc kubenswrapper[4869]: I0130 21:45:15.488656 4869 generic.go:334] "Generic (PLEG): container finished" podID="dac3c503-e284-4df8-ae5e-0084a884e456" containerID="09e707217ddad89be77f68915c4948c1bbc2e44066f16cce7e255a5a91c1e101" exitCode=1 Jan 30 21:45:15 crc kubenswrapper[4869]: I0130 21:45:15.488702 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tz8jn" event={"ID":"dac3c503-e284-4df8-ae5e-0084a884e456","Type":"ContainerDied","Data":"09e707217ddad89be77f68915c4948c1bbc2e44066f16cce7e255a5a91c1e101"} Jan 30 21:45:15 crc kubenswrapper[4869]: I0130 21:45:15.488785 4869 scope.go:117] "RemoveContainer" containerID="6e6d47a8f0a09eac03369c1f21e44d8ec6aaf85d4d7ad180579ecd30a6311061" Jan 30 21:45:15 crc kubenswrapper[4869]: I0130 21:45:15.489288 4869 scope.go:117] "RemoveContainer" containerID="09e707217ddad89be77f68915c4948c1bbc2e44066f16cce7e255a5a91c1e101" Jan 30 21:45:15 crc kubenswrapper[4869]: E0130 21:45:15.489478 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-tz8jn_openshift-multus(dac3c503-e284-4df8-ae5e-0084a884e456)\"" pod="openshift-multus/multus-tz8jn" podUID="dac3c503-e284-4df8-ae5e-0084a884e456" Jan 30 21:45:15 crc kubenswrapper[4869]: I0130 21:45:15.876250 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:45:15 crc kubenswrapper[4869]: I0130 21:45:15.876286 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:45:15 crc kubenswrapper[4869]: I0130 21:45:15.876294 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:45:15 crc kubenswrapper[4869]: I0130 21:45:15.876401 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:45:15 crc kubenswrapper[4869]: E0130 21:45:15.876415 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:45:15 crc kubenswrapper[4869]: E0130 21:45:15.876472 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd" Jan 30 21:45:15 crc kubenswrapper[4869]: E0130 21:45:15.876525 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:45:15 crc kubenswrapper[4869]: E0130 21:45:15.876576 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:45:16 crc kubenswrapper[4869]: I0130 21:45:16.492966 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tz8jn_dac3c503-e284-4df8-ae5e-0084a884e456/kube-multus/1.log" Jan 30 21:45:16 crc kubenswrapper[4869]: I0130 21:45:16.878116 4869 scope.go:117] "RemoveContainer" containerID="92b01be743457d0429d6a2015dd3ee61689a3806691b49ca36b1cdf62717cb21" Jan 30 21:45:16 crc kubenswrapper[4869]: E0130 21:45:16.878456 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-stqvf_openshift-ovn-kubernetes(c39d4fe5-06cd-4ea4-8336-bd481332c475)\"" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" Jan 30 21:45:17 crc kubenswrapper[4869]: I0130 21:45:17.876774 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:45:17 crc kubenswrapper[4869]: I0130 21:45:17.876872 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:45:17 crc kubenswrapper[4869]: I0130 21:45:17.877033 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:45:17 crc kubenswrapper[4869]: E0130 21:45:17.877035 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:45:17 crc kubenswrapper[4869]: I0130 21:45:17.877143 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:45:17 crc kubenswrapper[4869]: E0130 21:45:17.877277 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:45:17 crc kubenswrapper[4869]: E0130 21:45:17.877384 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd" Jan 30 21:45:17 crc kubenswrapper[4869]: E0130 21:45:17.877477 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:45:19 crc kubenswrapper[4869]: I0130 21:45:19.876471 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:45:19 crc kubenswrapper[4869]: E0130 21:45:19.877696 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:45:19 crc kubenswrapper[4869]: I0130 21:45:19.877757 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:45:19 crc kubenswrapper[4869]: I0130 21:45:19.877790 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:45:19 crc kubenswrapper[4869]: E0130 21:45:19.877817 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:45:19 crc kubenswrapper[4869]: I0130 21:45:19.877756 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:45:19 crc kubenswrapper[4869]: E0130 21:45:19.877875 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:45:19 crc kubenswrapper[4869]: E0130 21:45:19.878044 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd" Jan 30 21:45:19 crc kubenswrapper[4869]: E0130 21:45:19.905769 4869 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 30 21:45:19 crc kubenswrapper[4869]: E0130 21:45:19.972646 4869 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 21:45:21 crc kubenswrapper[4869]: I0130 21:45:21.876356 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:45:21 crc kubenswrapper[4869]: I0130 21:45:21.876403 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:45:21 crc kubenswrapper[4869]: I0130 21:45:21.876457 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:45:21 crc kubenswrapper[4869]: I0130 21:45:21.876356 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:45:21 crc kubenswrapper[4869]: E0130 21:45:21.876500 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:45:21 crc kubenswrapper[4869]: E0130 21:45:21.876909 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:45:21 crc kubenswrapper[4869]: E0130 21:45:21.877130 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:45:21 crc kubenswrapper[4869]: E0130 21:45:21.877236 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd" Jan 30 21:45:23 crc kubenswrapper[4869]: I0130 21:45:23.876455 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:45:23 crc kubenswrapper[4869]: E0130 21:45:23.876926 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:45:23 crc kubenswrapper[4869]: I0130 21:45:23.876566 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:45:23 crc kubenswrapper[4869]: E0130 21:45:23.877062 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:45:23 crc kubenswrapper[4869]: I0130 21:45:23.876584 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:45:23 crc kubenswrapper[4869]: E0130 21:45:23.877151 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd" Jan 30 21:45:23 crc kubenswrapper[4869]: I0130 21:45:23.876549 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:45:23 crc kubenswrapper[4869]: E0130 21:45:23.877229 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:45:24 crc kubenswrapper[4869]: E0130 21:45:24.974218 4869 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 21:45:25 crc kubenswrapper[4869]: I0130 21:45:25.876247 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:45:25 crc kubenswrapper[4869]: E0130 21:45:25.876387 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd" Jan 30 21:45:25 crc kubenswrapper[4869]: I0130 21:45:25.876265 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:45:25 crc kubenswrapper[4869]: E0130 21:45:25.876461 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:45:25 crc kubenswrapper[4869]: I0130 21:45:25.876256 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:45:25 crc kubenswrapper[4869]: E0130 21:45:25.876515 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:45:25 crc kubenswrapper[4869]: I0130 21:45:25.876247 4869 util.go:30] "No sandbox for pod can be found. 
Jan 30 21:45:25 crc kubenswrapper[4869]: E0130 21:45:25.876555 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:45:27 crc kubenswrapper[4869]: I0130 21:45:27.876629 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:45:27 crc kubenswrapper[4869]: I0130 21:45:27.876681 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:45:27 crc kubenswrapper[4869]: I0130 21:45:27.876676 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:45:27 crc kubenswrapper[4869]: I0130 21:45:27.876696 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p"
Jan 30 21:45:27 crc kubenswrapper[4869]: E0130 21:45:27.876778 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:45:27 crc kubenswrapper[4869]: E0130 21:45:27.876949 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:45:27 crc kubenswrapper[4869]: E0130 21:45:27.877039 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:45:27 crc kubenswrapper[4869]: E0130 21:45:27.877110 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd"
Jan 30 21:45:28 crc kubenswrapper[4869]: I0130 21:45:28.877444 4869 scope.go:117] "RemoveContainer" containerID="09e707217ddad89be77f68915c4948c1bbc2e44066f16cce7e255a5a91c1e101"
Jan 30 21:45:29 crc kubenswrapper[4869]: I0130 21:45:29.532154 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tz8jn_dac3c503-e284-4df8-ae5e-0084a884e456/kube-multus/1.log"
Jan 30 21:45:29 crc kubenswrapper[4869]: I0130 21:45:29.532207 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tz8jn" event={"ID":"dac3c503-e284-4df8-ae5e-0084a884e456","Type":"ContainerStarted","Data":"a1ac73f7852a42d020dcf55e34ad0e1f39e08d7fbc25fee2d0148150ba37264b"}
Jan 30 21:45:29 crc kubenswrapper[4869]: I0130 21:45:29.877252 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:45:29 crc kubenswrapper[4869]: I0130 21:45:29.877293 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:45:29 crc kubenswrapper[4869]: I0130 21:45:29.877252 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p"
Jan 30 21:45:29 crc kubenswrapper[4869]: I0130 21:45:29.877293 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:45:29 crc kubenswrapper[4869]: E0130 21:45:29.877803 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:45:29 crc kubenswrapper[4869]: E0130 21:45:29.877969 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:45:29 crc kubenswrapper[4869]: E0130 21:45:29.878013 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd"
Jan 30 21:45:29 crc kubenswrapper[4869]: E0130 21:45:29.878069 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:45:29 crc kubenswrapper[4869]: E0130 21:45:29.974640 4869 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 30 21:45:31 crc kubenswrapper[4869]: I0130 21:45:31.876568 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:45:31 crc kubenswrapper[4869]: I0130 21:45:31.876643 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:45:31 crc kubenswrapper[4869]: I0130 21:45:31.876650 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:45:31 crc kubenswrapper[4869]: I0130 21:45:31.876729 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p"
Jan 30 21:45:31 crc kubenswrapper[4869]: E0130 21:45:31.877105 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:45:31 crc kubenswrapper[4869]: E0130 21:45:31.877554 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:45:31 crc kubenswrapper[4869]: E0130 21:45:31.877726 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd"
Jan 30 21:45:31 crc kubenswrapper[4869]: E0130 21:45:31.878448 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:45:31 crc kubenswrapper[4869]: I0130 21:45:31.878931 4869 scope.go:117] "RemoveContainer" containerID="92b01be743457d0429d6a2015dd3ee61689a3806691b49ca36b1cdf62717cb21"
Jan 30 21:45:32 crc kubenswrapper[4869]: I0130 21:45:32.544773 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-stqvf_c39d4fe5-06cd-4ea4-8336-bd481332c475/ovnkube-controller/3.log"
Jan 30 21:45:32 crc kubenswrapper[4869]: I0130 21:45:32.548011 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" event={"ID":"c39d4fe5-06cd-4ea4-8336-bd481332c475","Type":"ContainerStarted","Data":"9d117969d303cc766951654a0c992d0feda615c5f64221565c8b3a44dc2c114f"}
Jan 30 21:45:32 crc kubenswrapper[4869]: I0130 21:45:32.548502 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf"
Jan 30 21:45:32 crc kubenswrapper[4869]: I0130 21:45:32.589412 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" podStartSLOduration=112.58938029 podStartE2EDuration="1m52.58938029s" podCreationTimestamp="2026-01-30 21:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:32.587332726 +0000 UTC m=+133.473090771" watchObservedRunningTime="2026-01-30 21:45:32.58938029 +0000 UTC m=+133.475138335"
Jan 30 21:45:32 crc kubenswrapper[4869]: I0130 21:45:32.832239 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-45w6p"]
Jan 30 21:45:32 crc kubenswrapper[4869]: I0130 21:45:32.832358 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p"
Jan 30 21:45:32 crc kubenswrapper[4869]: E0130 21:45:32.832446 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd"
Jan 30 21:45:33 crc kubenswrapper[4869]: I0130 21:45:33.876340 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:45:33 crc kubenswrapper[4869]: I0130 21:45:33.876413 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:45:33 crc kubenswrapper[4869]: I0130 21:45:33.876477 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:45:33 crc kubenswrapper[4869]: I0130 21:45:33.876358 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p"
Jan 30 21:45:33 crc kubenswrapper[4869]: E0130 21:45:33.876545 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:45:33 crc kubenswrapper[4869]: E0130 21:45:33.876693 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:45:33 crc kubenswrapper[4869]: E0130 21:45:33.876815 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd"
Jan 30 21:45:33 crc kubenswrapper[4869]: E0130 21:45:33.876882 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:45:34 crc kubenswrapper[4869]: E0130 21:45:34.976436 4869 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 30 21:45:35 crc kubenswrapper[4869]: I0130 21:45:35.876140 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:45:35 crc kubenswrapper[4869]: I0130 21:45:35.876221 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p"
Jan 30 21:45:35 crc kubenswrapper[4869]: I0130 21:45:35.876240 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:45:35 crc kubenswrapper[4869]: I0130 21:45:35.876182 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:45:35 crc kubenswrapper[4869]: E0130 21:45:35.876335 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd"
Jan 30 21:45:35 crc kubenswrapper[4869]: E0130 21:45:35.876409 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:45:35 crc kubenswrapper[4869]: E0130 21:45:35.876564 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:45:35 crc kubenswrapper[4869]: E0130 21:45:35.876656 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:45:37 crc kubenswrapper[4869]: I0130 21:45:37.876353 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p"
Jan 30 21:45:37 crc kubenswrapper[4869]: I0130 21:45:37.876366 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:45:37 crc kubenswrapper[4869]: I0130 21:45:37.876485 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:45:37 crc kubenswrapper[4869]: E0130 21:45:37.876747 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd"
Jan 30 21:45:37 crc kubenswrapper[4869]: E0130 21:45:37.876943 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:45:37 crc kubenswrapper[4869]: E0130 21:45:37.877000 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:45:37 crc kubenswrapper[4869]: I0130 21:45:37.877309 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:45:37 crc kubenswrapper[4869]: E0130 21:45:37.877406 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:45:39 crc kubenswrapper[4869]: I0130 21:45:39.876109 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:45:39 crc kubenswrapper[4869]: I0130 21:45:39.876138 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p"
Jan 30 21:45:39 crc kubenswrapper[4869]: I0130 21:45:39.876178 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:45:39 crc kubenswrapper[4869]: I0130 21:45:39.876199 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:45:39 crc kubenswrapper[4869]: E0130 21:45:39.877624 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:45:39 crc kubenswrapper[4869]: E0130 21:45:39.877733 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:45:39 crc kubenswrapper[4869]: E0130 21:45:39.877833 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-45w6p" podUID="b980f4db-64d3-48c9-9ff8-18f23c4888cd"
Jan 30 21:45:39 crc kubenswrapper[4869]: E0130 21:45:39.877906 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:45:41 crc kubenswrapper[4869]: I0130 21:45:41.876109 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:45:41 crc kubenswrapper[4869]: I0130 21:45:41.876200 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:45:41 crc kubenswrapper[4869]: I0130 21:45:41.876414 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:45:41 crc kubenswrapper[4869]: I0130 21:45:41.876421 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p"
Jan 30 21:45:41 crc kubenswrapper[4869]: I0130 21:45:41.879251 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 30 21:45:41 crc kubenswrapper[4869]: I0130 21:45:41.879251 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 30 21:45:41 crc kubenswrapper[4869]: I0130 21:45:41.879307 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 30 21:45:41 crc kubenswrapper[4869]: I0130 21:45:41.879409 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 30 21:45:41 crc kubenswrapper[4869]: I0130 21:45:41.879446 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 30 21:45:41 crc kubenswrapper[4869]: I0130 21:45:41.879691 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 30 21:45:47 crc kubenswrapper[4869]: I0130 21:45:47.728767 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:45:47 crc kubenswrapper[4869]: E0130 21:45:47.729123 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:47:49.72908 +0000 UTC m=+270.614838035 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:45:47 crc kubenswrapper[4869]: I0130 21:45:47.830060 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:45:47 crc kubenswrapper[4869]: I0130 21:45:47.830130 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:45:47 crc kubenswrapper[4869]: I0130 21:45:47.830174 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:45:47 crc kubenswrapper[4869]: I0130 21:45:47.830227 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:45:47 crc kubenswrapper[4869]: I0130 21:45:47.831755 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:45:47 crc kubenswrapper[4869]: I0130 21:45:47.837263 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:45:47 crc kubenswrapper[4869]: I0130 21:45:47.837940 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:45:47 crc kubenswrapper[4869]: I0130 21:45:47.839128 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:45:47 crc kubenswrapper[4869]: I0130 21:45:47.895780 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:45:47 crc kubenswrapper[4869]: I0130 21:45:47.905368 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:45:47 crc kubenswrapper[4869]: I0130 21:45:47.926637 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:45:48 crc kubenswrapper[4869]: W0130 21:45:48.393735 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-14541db05977b6687efe53d5aa632196b2c6657558d063ce97ed592c361562e2 WatchSource:0}: Error finding container 14541db05977b6687efe53d5aa632196b2c6657558d063ce97ed592c361562e2: Status 404 returned error can't find the container with id 14541db05977b6687efe53d5aa632196b2c6657558d063ce97ed592c361562e2
Jan 30 21:45:48 crc kubenswrapper[4869]: I0130 21:45:48.601404 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"c47c8a84c13a1dfd6757a3bffe312551b41d625c1545b0b24e2b43f4cb995292"}
Jan 30 21:45:48 crc kubenswrapper[4869]: I0130 21:45:48.601504 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"14541db05977b6687efe53d5aa632196b2c6657558d063ce97ed592c361562e2"}
Jan 30 21:45:48 crc kubenswrapper[4869]: I0130 21:45:48.607332 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"7e968bb19bbc61e1b59a0e4723e28a477fb504e95065c1a8daa665be361dd774"}
Jan 30 21:45:48 crc kubenswrapper[4869]: I0130 21:45:48.607422 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"9cd4a8c22a8e3c3223804814a524c3b302076ca9ce9b01824230811a5a6b8b59"}
Jan 30 21:45:48 crc kubenswrapper[4869]: I0130 21:45:48.611971 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"b437acbd6cd48b0ca21efbffa71330173baf416fee227bb2f9d6175f71d71940"}
Jan 30 21:45:48 crc kubenswrapper[4869]: I0130 21:45:48.612006 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"f24a15a846fc1421b3c2f08690ddf2aaf7c539fe879a05ab6bdab2cd3d8a6ea4"}
Jan 30 21:45:48 crc kubenswrapper[4869]: I0130 21:45:48.612208 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.206022 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.248922 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-px2s8"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.249302 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-px2s8"
Jan 30 21:45:49 crc kubenswrapper[4869]: W0130 21:45:49.253646 4869 reflector.go:561] object-"openshift-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object
Jan 30 21:45:49 crc kubenswrapper[4869]: E0130 21:45:49.253684 4869 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.253812 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.255842 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-565td"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.256538 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-565td"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.256667 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-4kznw"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.257479 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-4kznw"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.258713 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-2wk5x"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.259333 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2wk5x"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.262227 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-mq7xx"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.262784 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-mq7xx"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.264651 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8brpl"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.265264 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-8brpl"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.265808 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7qhjt"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.266243 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7qhjt"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.268002 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6jqjl"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.268520 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-kgmf9"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.269079 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-kgmf9"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.269152 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-k4rff"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.269435 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6jqjl"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.269730 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-k4rff"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.270697 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkj8m"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.315182 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.315531 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.316088 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.316114 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.316677 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-w587f"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.317300 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-npnfg"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.317408 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.317706 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.317848 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.317879 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkj8m"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.317955 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-cjzmv"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.318000 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.318054 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.317880 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-npnfg"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.318182 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.318266 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.318874 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.319005 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.319051 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.319115 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.319213 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.319263 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.319477 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.319492 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.319612 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.319714 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.319735 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.319837 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.319869 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.319984 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.320060 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.320186 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.320244 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.320297 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.320407 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.320191 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.320576 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.320696 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.320710 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.320720 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.320777 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.320845 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.320979 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.320996 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.321102 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.321166 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.321268 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.321419 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.320419 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.321700 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.321869 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.323672 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.323812 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.324003 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.325023 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cjzmv"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.331062 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w587f"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.333915 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.334149 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v5kfz"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.334406 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.334629 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.335043 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.335257 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.335323 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.335370 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.335447 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.335526 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.335542 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.339114 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.340040 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.342780 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.348281 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7tlh\" (UniqueName: \"kubernetes.io/projected/98f93152-3943-4bb2-ac4b-d2d79286e19d-kube-api-access-c7tlh\") pod \"authentication-operator-69f744f599-565td\" (UID: \"98f93152-3943-4bb2-ac4b-d2d79286e19d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-565td"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.348315 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgdbf\" (UniqueName: \"kubernetes.io/projected/8922ebf0-c10f-4963-b517-e51ba2284e99-kube-api-access-wgdbf\") pod \"apiserver-76f77b778f-4kznw\" (UID: \"8922ebf0-c10f-4963-b517-e51ba2284e99\") " pod="openshift-apiserver/apiserver-76f77b778f-4kznw"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.348335 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98f93152-3943-4bb2-ac4b-d2d79286e19d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-565td\" (UID: \"98f93152-3943-4bb2-ac4b-d2d79286e19d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-565td"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.348350 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98f93152-3943-4bb2-ac4b-d2d79286e19d-service-ca-bundle\") pod \"authentication-operator-69f744f599-565td\" (UID: \"98f93152-3943-4bb2-ac4b-d2d79286e19d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-565td"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.348366 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6d91a880-3f38-492b-b797-6fc24c2da65e-etcd-client\") pod \"apiserver-7bbb656c7d-w587f\" (UID: \"6d91a880-3f38-492b-b797-6fc24c2da65e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w587f"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.348383 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgw2t\" (UniqueName: \"kubernetes.io/projected/8b2bd278-ed0e-4e4e-90a6-9286bc29a664-kube-api-access-sgw2t\") pod \"controller-manager-879f6c89f-px2s8\" (UID: \"8b2bd278-ed0e-4e4e-90a6-9286bc29a664\") " pod="openshift-controller-manager/controller-manager-879f6c89f-px2s8"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.348398 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6d91a880-3f38-492b-b797-6fc24c2da65e-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-w587f\" (UID: \"6d91a880-3f38-492b-b797-6fc24c2da65e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w587f"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.348414 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8922ebf0-c10f-4963-b517-e51ba2284e99-audit\") pod \"apiserver-76f77b778f-4kznw\" (UID: \"8922ebf0-c10f-4963-b517-e51ba2284e99\") " pod="openshift-apiserver/apiserver-76f77b778f-4kznw"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.348434 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/f1da8dd6-5d73-4fc5-9e46-40c77930b2da-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rkj8m\" (UID: \"f1da8dd6-5d73-4fc5-9e46-40c77930b2da\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkj8m"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.348455 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl5qw\" (UniqueName: \"kubernetes.io/projected/d8bca2a0-12fd-48ab-9507-ca0824a394cb-kube-api-access-pl5qw\") pod \"machine-approver-56656f9798-2wk5x\" (UID: \"d8bca2a0-12fd-48ab-9507-ca0824a394cb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2wk5x"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.348469 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c31c9863-59b6-490a-95d1-53d4a9707117-console-config\") pod \"console-f9d7485db-mq7xx\" (UID: \"c31c9863-59b6-490a-95d1-53d4a9707117\") " pod="openshift-console/console-f9d7485db-mq7xx"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.348485 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8922ebf0-c10f-4963-b517-e51ba2284e99-trusted-ca-bundle\") pod \"apiserver-76f77b778f-4kznw\" (UID: \"8922ebf0-c10f-4963-b517-e51ba2284e99\") " pod="openshift-apiserver/apiserver-76f77b778f-4kznw"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.348504 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d8bca2a0-12fd-48ab-9507-ca0824a394cb-machine-approver-tls\") pod \"machine-approver-56656f9798-2wk5x\" (UID: \"d8bca2a0-12fd-48ab-9507-ca0824a394cb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2wk5x"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.348521 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c1f0b262-4d72-49a2-aa45-918fbc89a9f2-images\") pod \"machine-api-operator-5694c8668f-npnfg\" (UID: \"c1f0b262-4d72-49a2-aa45-918fbc89a9f2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-npnfg"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.348545 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86ggd\" (UniqueName: \"kubernetes.io/projected/901784d6-cd46-4181-a06f-f88c49faac0e-kube-api-access-86ggd\") pod \"cluster-samples-operator-665b6dd947-7qhjt\" (UID: \"901784d6-cd46-4181-a06f-f88c49faac0e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7qhjt"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.348559 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5894a987-705c-4bf2-a99f-ed766e5618db-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-6jqjl\" (UID: \"5894a987-705c-4bf2-a99f-ed766e5618db\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6jqjl"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.348575 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8922ebf0-c10f-4963-b517-e51ba2284e99-audit-dir\") pod \"apiserver-76f77b778f-4kznw\" (UID: \"8922ebf0-c10f-4963-b517-e51ba2284e99\") " pod="openshift-apiserver/apiserver-76f77b778f-4kznw"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.348628 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/1f418006-5265-449a-8e91-64144e311a6b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-cjzmv\" (UID: \"1f418006-5265-449a-8e91-64144e311a6b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cjzmv"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.348663 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b2bd278-ed0e-4e4e-90a6-9286bc29a664-client-ca\") pod \"controller-manager-879f6c89f-px2s8\" (UID: \"8b2bd278-ed0e-4e4e-90a6-9286bc29a664\") " pod="openshift-controller-manager/controller-manager-879f6c89f-px2s8"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.348684 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98f93152-3943-4bb2-ac4b-d2d79286e19d-config\") pod \"authentication-operator-69f744f599-565td\" (UID: \"98f93152-3943-4bb2-ac4b-d2d79286e19d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-565td"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.348707 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d91a880-3f38-492b-b797-6fc24c2da65e-serving-cert\") pod \"apiserver-7bbb656c7d-w587f\" (UID: \"6d91a880-3f38-492b-b797-6fc24c2da65e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w587f"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.348729 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6kn8\" (UniqueName: \"kubernetes.io/projected/5894a987-705c-4bf2-a99f-ed766e5618db-kube-api-access-k6kn8\") pod \"openshift-apiserver-operator-796bbdcf4f-6jqjl\" (UID: \"5894a987-705c-4bf2-a99f-ed766e5618db\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6jqjl"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.348766 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8922ebf0-c10f-4963-b517-e51ba2284e99-image-import-ca\") pod \"apiserver-76f77b778f-4kznw\" (UID: \"8922ebf0-c10f-4963-b517-e51ba2284e99\") " pod="openshift-apiserver/apiserver-76f77b778f-4kznw"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.348788 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwchr\" (UniqueName: \"kubernetes.io/projected/c31c9863-59b6-490a-95d1-53d4a9707117-kube-api-access-vwchr\") pod \"console-f9d7485db-mq7xx\" (UID: \"c31c9863-59b6-490a-95d1-53d4a9707117\") " pod="openshift-console/console-f9d7485db-mq7xx"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.348805 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lpqp\" (UniqueName: \"kubernetes.io/projected/49697d03-35f3-4fa3-9141-2bb8ae8eccab-kube-api-access-8lpqp\") pod \"downloads-7954f5f757-k4rff\" (UID: \"49697d03-35f3-4fa3-9141-2bb8ae8eccab\") " pod="openshift-console/downloads-7954f5f757-k4rff"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.348829 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6d91a880-3f38-492b-b797-6fc24c2da65e-audit-policies\") pod \"apiserver-7bbb656c7d-w587f\" (UID: \"6d91a880-3f38-492b-b797-6fc24c2da65e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w587f"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.348844 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.348845 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d91a880-3f38-492b-b797-6fc24c2da65e-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-w587f\" (UID: \"6d91a880-3f38-492b-b797-6fc24c2da65e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w587f"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.348923 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/901784d6-cd46-4181-a06f-f88c49faac0e-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-7qhjt\" (UID: \"901784d6-cd46-4181-a06f-f88c49faac0e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7qhjt"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.348944 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx4hs\" (UniqueName: \"kubernetes.io/projected/c1f0b262-4d72-49a2-aa45-918fbc89a9f2-kube-api-access-wx4hs\") pod \"machine-api-operator-5694c8668f-npnfg\" (UID: \"c1f0b262-4d72-49a2-aa45-918fbc89a9f2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-npnfg"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.348962 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7584681-881d-465c-b4e8-121404518807-config\") pod \"console-operator-58897d9998-8brpl\" (UID: \"c7584681-881d-465c-b4e8-121404518807\") " pod="openshift-console-operator/console-operator-58897d9998-8brpl"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.348977 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98f93152-3943-4bb2-ac4b-d2d79286e19d-serving-cert\") pod \"authentication-operator-69f744f599-565td\" (UID: \"98f93152-3943-4bb2-ac4b-d2d79286e19d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-565td"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.348995 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8922ebf0-c10f-4963-b517-e51ba2284e99-serving-cert\") pod \"apiserver-76f77b778f-4kznw\" (UID: \"8922ebf0-c10f-4963-b517-e51ba2284e99\") " pod="openshift-apiserver/apiserver-76f77b778f-4kznw"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.349014 4869
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b2bd278-ed0e-4e4e-90a6-9286bc29a664-serving-cert\") pod \"controller-manager-879f6c89f-px2s8\" (UID: \"8b2bd278-ed0e-4e4e-90a6-9286bc29a664\") " pod="openshift-controller-manager/controller-manager-879f6c89f-px2s8" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.349058 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wm5m\" (UniqueName: \"kubernetes.io/projected/6d91a880-3f38-492b-b797-6fc24c2da65e-kube-api-access-7wm5m\") pod \"apiserver-7bbb656c7d-w587f\" (UID: \"6d91a880-3f38-492b-b797-6fc24c2da65e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w587f" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.349095 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b2bd278-ed0e-4e4e-90a6-9286bc29a664-config\") pod \"controller-manager-879f6c89f-px2s8\" (UID: \"8b2bd278-ed0e-4e4e-90a6-9286bc29a664\") " pod="openshift-controller-manager/controller-manager-879f6c89f-px2s8" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.349111 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b2bd278-ed0e-4e4e-90a6-9286bc29a664-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-px2s8\" (UID: \"8b2bd278-ed0e-4e4e-90a6-9286bc29a664\") " pod="openshift-controller-manager/controller-manager-879f6c89f-px2s8" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.349130 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6d91a880-3f38-492b-b797-6fc24c2da65e-audit-dir\") pod \"apiserver-7bbb656c7d-w587f\" (UID: \"6d91a880-3f38-492b-b797-6fc24c2da65e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w587f" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.349164 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5894a987-705c-4bf2-a99f-ed766e5618db-config\") pod \"openshift-apiserver-operator-796bbdcf4f-6jqjl\" (UID: \"5894a987-705c-4bf2-a99f-ed766e5618db\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6jqjl" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.349191 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f1da8dd6-5d73-4fc5-9e46-40c77930b2da-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rkj8m\" (UID: \"f1da8dd6-5d73-4fc5-9e46-40c77930b2da\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkj8m" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.349210 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx5xq\" (UniqueName: \"kubernetes.io/projected/f1da8dd6-5d73-4fc5-9e46-40c77930b2da-kube-api-access-mx5xq\") pod \"cluster-image-registry-operator-dc59b4c8b-rkj8m\" (UID: \"f1da8dd6-5d73-4fc5-9e46-40c77930b2da\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkj8m" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.349231 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1f0b262-4d72-49a2-aa45-918fbc89a9f2-config\") pod \"machine-api-operator-5694c8668f-npnfg\" (UID: \"c1f0b262-4d72-49a2-aa45-918fbc89a9f2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-npnfg" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.349247 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1da8dd6-5d73-4fc5-9e46-40c77930b2da-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rkj8m\" (UID: \"f1da8dd6-5d73-4fc5-9e46-40c77930b2da\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkj8m" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.349269 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8922ebf0-c10f-4963-b517-e51ba2284e99-node-pullsecrets\") pod \"apiserver-76f77b778f-4kznw\" (UID: \"8922ebf0-c10f-4963-b517-e51ba2284e99\") " pod="openshift-apiserver/apiserver-76f77b778f-4kznw" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.349290 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8922ebf0-c10f-4963-b517-e51ba2284e99-etcd-serving-ca\") pod \"apiserver-76f77b778f-4kznw\" (UID: \"8922ebf0-c10f-4963-b517-e51ba2284e99\") " pod="openshift-apiserver/apiserver-76f77b778f-4kznw" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.349339 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8922ebf0-c10f-4963-b517-e51ba2284e99-encryption-config\") pod \"apiserver-76f77b778f-4kznw\" (UID: \"8922ebf0-c10f-4963-b517-e51ba2284e99\") " pod="openshift-apiserver/apiserver-76f77b778f-4kznw" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.349358 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d8bca2a0-12fd-48ab-9507-ca0824a394cb-auth-proxy-config\") pod \"machine-approver-56656f9798-2wk5x\" (UID: \"d8bca2a0-12fd-48ab-9507-ca0824a394cb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2wk5x" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.349375 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c31c9863-59b6-490a-95d1-53d4a9707117-trusted-ca-bundle\") pod \"console-f9d7485db-mq7xx\" (UID: \"c31c9863-59b6-490a-95d1-53d4a9707117\") " pod="openshift-console/console-f9d7485db-mq7xx" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.349393 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c1f0b262-4d72-49a2-aa45-918fbc89a9f2-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-npnfg\" (UID: \"c1f0b262-4d72-49a2-aa45-918fbc89a9f2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-npnfg" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.349413 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/c31c9863-59b6-490a-95d1-53d4a9707117-oauth-serving-cert\") pod \"console-f9d7485db-mq7xx\" (UID: \"c31c9863-59b6-490a-95d1-53d4a9707117\") " pod="openshift-console/console-f9d7485db-mq7xx" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.349428 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jcv4\" (UniqueName: \"kubernetes.io/projected/400bd72e-e645-473e-b285-497d96f567ac-kube-api-access-6jcv4\") pod \"dns-operator-744455d44c-kgmf9\" (UID: \"400bd72e-e645-473e-b285-497d96f567ac\") " pod="openshift-dns-operator/dns-operator-744455d44c-kgmf9" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.349456 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7584681-881d-465c-b4e8-121404518807-serving-cert\") pod \"console-operator-58897d9998-8brpl\" (UID: \"c7584681-881d-465c-b4e8-121404518807\") " pod="openshift-console-operator/console-operator-58897d9998-8brpl" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.349474 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c7584681-881d-465c-b4e8-121404518807-trusted-ca\") pod \"console-operator-58897d9998-8brpl\" (UID: \"c7584681-881d-465c-b4e8-121404518807\") " pod="openshift-console-operator/console-operator-58897d9998-8brpl" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.349492 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6d91a880-3f38-492b-b797-6fc24c2da65e-encryption-config\") pod \"apiserver-7bbb656c7d-w587f\" (UID: \"6d91a880-3f38-492b-b797-6fc24c2da65e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w587f" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.349511 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dphxl\" (UniqueName: \"kubernetes.io/projected/c7584681-881d-465c-b4e8-121404518807-kube-api-access-dphxl\") pod \"console-operator-58897d9998-8brpl\" (UID: \"c7584681-881d-465c-b4e8-121404518807\") " pod="openshift-console-operator/console-operator-58897d9998-8brpl" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.349528 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8922ebf0-c10f-4963-b517-e51ba2284e99-config\") pod \"apiserver-76f77b778f-4kznw\" (UID: \"8922ebf0-c10f-4963-b517-e51ba2284e99\") " pod="openshift-apiserver/apiserver-76f77b778f-4kznw" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.349545 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8922ebf0-c10f-4963-b517-e51ba2284e99-etcd-client\") pod \"apiserver-76f77b778f-4kznw\" (UID: \"8922ebf0-c10f-4963-b517-e51ba2284e99\") " pod="openshift-apiserver/apiserver-76f77b778f-4kznw" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.349560 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8bca2a0-12fd-48ab-9507-ca0824a394cb-config\") pod \"machine-approver-56656f9798-2wk5x\" (UID: \"d8bca2a0-12fd-48ab-9507-ca0824a394cb\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2wk5x" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.349577 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f418006-5265-449a-8e91-64144e311a6b-serving-cert\") pod \"openshift-config-operator-7777fb866f-cjzmv\" (UID: \"1f418006-5265-449a-8e91-64144e311a6b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cjzmv" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.349594 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2h47\" (UniqueName: \"kubernetes.io/projected/1f418006-5265-449a-8e91-64144e311a6b-kube-api-access-q2h47\") pod \"openshift-config-operator-7777fb866f-cjzmv\" (UID: \"1f418006-5265-449a-8e91-64144e311a6b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cjzmv" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.349609 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c31c9863-59b6-490a-95d1-53d4a9707117-service-ca\") pod \"console-f9d7485db-mq7xx\" (UID: \"c31c9863-59b6-490a-95d1-53d4a9707117\") " pod="openshift-console/console-f9d7485db-mq7xx" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.349624 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c31c9863-59b6-490a-95d1-53d4a9707117-console-serving-cert\") pod \"console-f9d7485db-mq7xx\" (UID: \"c31c9863-59b6-490a-95d1-53d4a9707117\") " pod="openshift-console/console-f9d7485db-mq7xx" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.349642 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c31c9863-59b6-490a-95d1-53d4a9707117-console-oauth-config\") pod \"console-f9d7485db-mq7xx\" (UID: \"c31c9863-59b6-490a-95d1-53d4a9707117\") " pod="openshift-console/console-f9d7485db-mq7xx" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.349658 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/400bd72e-e645-473e-b285-497d96f567ac-metrics-tls\") pod \"dns-operator-744455d44c-kgmf9\" (UID: \"400bd72e-e645-473e-b285-497d96f567ac\") " pod="openshift-dns-operator/dns-operator-744455d44c-kgmf9" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.351955 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.377934 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fgmnt"] Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.393442 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-m5j8f"] Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.393597 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.393837 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 30 21:45:49 crc 
kubenswrapper[4869]: I0130 21:45:49.394882 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.395173 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.396422 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.396868 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.396968 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.397131 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.397302 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v5kfz" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.398013 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.398147 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.402624 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.402827 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.402986 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.403087 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.403292 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.404224 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4q8n"] Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.404600 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-j58b4"] Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.404626 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.404947 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5j8f" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.405069 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-px2s8"] Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.405100 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-bgcv6"] Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.405494 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-jqxjt"] Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.406093 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-hgz5t"] Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.406704 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4t6ks"] Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.407159 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-565td"] Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.407189 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-4kznw"] Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.407209 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6bxlw"] Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.407641 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q4k7p"] Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.408153 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-6tz69"] Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.408751 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4pn7q"] Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.408990 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.409256 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jqxjt" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.409870 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4pn7q" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.410916 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.411505 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-bgcv6" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.411637 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.411959 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.412082 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.412197 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.412234 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.412361 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.412685 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q4k7p" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.413141 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9zpxh"] Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.413762 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-4p68r"] Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.414263 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9f2tz"] Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.414393 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4t6ks" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.414489 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9zpxh" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.414647 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6tz69" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.414681 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-hgz5t" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.414923 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9f2tz" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.415023 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4p68r" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.418343 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4q8n" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.418520 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.418841 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6bxlw" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.420056 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.420196 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.420290 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.420657 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.420864 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.420914 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.421112 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.421229 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.421423 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.421995 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.422180 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.422353 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.424856 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.424961 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-b86ql"] Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.425731 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-b86ql" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.426880 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kkgzt"] Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.427553 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-brwlx"] Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.428257 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-brwlx" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.430302 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m95d6"] Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.431083 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m95d6" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.431760 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kkgzt" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.434042 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5dlg7"] Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.435108 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5dlg7" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.436840 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.439614 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-v5sqv"] Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.441155 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-v5sqv" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.442293 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-2zzvw"] Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.462678 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8922ebf0-c10f-4963-b517-e51ba2284e99-etcd-client\") pod \"apiserver-76f77b778f-4kznw\" (UID: \"8922ebf0-c10f-4963-b517-e51ba2284e99\") " pod="openshift-apiserver/apiserver-76f77b778f-4kznw" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.462937 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fgmnt\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.463083 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8bca2a0-12fd-48ab-9507-ca0824a394cb-config\") pod \"machine-approver-56656f9798-2wk5x\" (UID: \"d8bca2a0-12fd-48ab-9507-ca0824a394cb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2wk5x" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.463118 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c31c9863-59b6-490a-95d1-53d4a9707117-service-ca\") pod \"console-f9d7485db-mq7xx\" (UID: \"c31c9863-59b6-490a-95d1-53d4a9707117\") " pod="openshift-console/console-f9d7485db-mq7xx" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.463248 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f418006-5265-449a-8e91-64144e311a6b-serving-cert\") pod \"openshift-config-operator-7777fb866f-cjzmv\" (UID: \"1f418006-5265-449a-8e91-64144e311a6b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cjzmv" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.463288 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2h47\" (UniqueName: \"kubernetes.io/projected/1f418006-5265-449a-8e91-64144e311a6b-kube-api-access-q2h47\") pod \"openshift-config-operator-7777fb866f-cjzmv\" (UID: \"1f418006-5265-449a-8e91-64144e311a6b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cjzmv" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.463416 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fgmnt\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.463456 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c31c9863-59b6-490a-95d1-53d4a9707117-console-serving-cert\") pod \"console-f9d7485db-mq7xx\" (UID: 
\"c31c9863-59b6-490a-95d1-53d4a9707117\") " pod="openshift-console/console-f9d7485db-mq7xx" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.463593 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c31c9863-59b6-490a-95d1-53d4a9707117-console-oauth-config\") pod \"console-f9d7485db-mq7xx\" (UID: \"c31c9863-59b6-490a-95d1-53d4a9707117\") " pod="openshift-console/console-f9d7485db-mq7xx" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.463627 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/400bd72e-e645-473e-b285-497d96f567ac-metrics-tls\") pod \"dns-operator-744455d44c-kgmf9\" (UID: \"400bd72e-e645-473e-b285-497d96f567ac\") " pod="openshift-dns-operator/dns-operator-744455d44c-kgmf9" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.463823 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mktrn\" (UniqueName: \"kubernetes.io/projected/d151f3f9-526d-40a2-8021-13aa609295b1-kube-api-access-mktrn\") pod \"multus-admission-controller-857f4d67dd-hgz5t\" (UID: \"d151f3f9-526d-40a2-8021-13aa609295b1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hgz5t" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.464401 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.464585 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgdbf\" (UniqueName: \"kubernetes.io/projected/8922ebf0-c10f-4963-b517-e51ba2284e99-kube-api-access-wgdbf\") pod \"apiserver-76f77b778f-4kznw\" (UID: \"8922ebf0-c10f-4963-b517-e51ba2284e99\") " pod="openshift-apiserver/apiserver-76f77b778f-4kznw" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.464690 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7tlh\" (UniqueName: \"kubernetes.io/projected/98f93152-3943-4bb2-ac4b-d2d79286e19d-kube-api-access-c7tlh\") pod \"authentication-operator-69f744f599-565td\" (UID: \"98f93152-3943-4bb2-ac4b-d2d79286e19d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-565td" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.464748 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6d91a880-3f38-492b-b797-6fc24c2da65e-etcd-client\") pod \"apiserver-7bbb656c7d-w587f\" (UID: \"6d91a880-3f38-492b-b797-6fc24c2da65e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w587f" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.464793 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm6zq\" (UniqueName: \"kubernetes.io/projected/e7c790ef-ae52-4809-b6f2-088811793867-kube-api-access-wm6zq\") pod \"oauth-openshift-558db77b4-fgmnt\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.464829 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98f93152-3943-4bb2-ac4b-d2d79286e19d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-565td\" (UID: 
\"98f93152-3943-4bb2-ac4b-d2d79286e19d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-565td" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.464864 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98f93152-3943-4bb2-ac4b-d2d79286e19d-service-ca-bundle\") pod \"authentication-operator-69f744f599-565td\" (UID: \"98f93152-3943-4bb2-ac4b-d2d79286e19d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-565td" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.464926 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fgmnt\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.464972 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgw2t\" (UniqueName: \"kubernetes.io/projected/8b2bd278-ed0e-4e4e-90a6-9286bc29a664-kube-api-access-sgw2t\") pod \"controller-manager-879f6c89f-px2s8\" (UID: \"8b2bd278-ed0e-4e4e-90a6-9286bc29a664\") " pod="openshift-controller-manager/controller-manager-879f6c89f-px2s8" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.465001 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6d91a880-3f38-492b-b797-6fc24c2da65e-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-w587f\" (UID: \"6d91a880-3f38-492b-b797-6fc24c2da65e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w587f" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.465036 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/f1da8dd6-5d73-4fc5-9e46-40c77930b2da-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rkj8m\" (UID: \"f1da8dd6-5d73-4fc5-9e46-40c77930b2da\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkj8m" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.465081 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/76a78a93-c264-4c12-b89a-b265e6731c7e-auth-proxy-config\") pod \"machine-config-operator-74547568cd-6tz69\" (UID: \"76a78a93-c264-4c12-b89a-b265e6731c7e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6tz69" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.465123 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8922ebf0-c10f-4963-b517-e51ba2284e99-audit\") pod \"apiserver-76f77b778f-4kznw\" (UID: \"8922ebf0-c10f-4963-b517-e51ba2284e99\") " pod="openshift-apiserver/apiserver-76f77b778f-4kznw" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.465161 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8922ebf0-c10f-4963-b517-e51ba2284e99-trusted-ca-bundle\") pod \"apiserver-76f77b778f-4kznw\" (UID: \"8922ebf0-c10f-4963-b517-e51ba2284e99\") " 
pod="openshift-apiserver/apiserver-76f77b778f-4kznw" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.465194 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d8bca2a0-12fd-48ab-9507-ca0824a394cb-machine-approver-tls\") pod \"machine-approver-56656f9798-2wk5x\" (UID: \"d8bca2a0-12fd-48ab-9507-ca0824a394cb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2wk5x" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.465233 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pl5qw\" (UniqueName: \"kubernetes.io/projected/d8bca2a0-12fd-48ab-9507-ca0824a394cb-kube-api-access-pl5qw\") pod \"machine-approver-56656f9798-2wk5x\" (UID: \"d8bca2a0-12fd-48ab-9507-ca0824a394cb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2wk5x" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.465273 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c31c9863-59b6-490a-95d1-53d4a9707117-console-config\") pod \"console-f9d7485db-mq7xx\" (UID: \"c31c9863-59b6-490a-95d1-53d4a9707117\") " pod="openshift-console/console-f9d7485db-mq7xx" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.465375 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fgmnt\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.465419 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr856\" (UniqueName: \"kubernetes.io/projected/76a78a93-c264-4c12-b89a-b265e6731c7e-kube-api-access-rr856\") pod \"machine-config-operator-74547568cd-6tz69\" (UID: \"76a78a93-c264-4c12-b89a-b265e6731c7e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6tz69" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.465464 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c1f0b262-4d72-49a2-aa45-918fbc89a9f2-images\") pod \"machine-api-operator-5694c8668f-npnfg\" (UID: \"c1f0b262-4d72-49a2-aa45-918fbc89a9f2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-npnfg" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.465506 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmlgg\" (UniqueName: \"kubernetes.io/projected/5feff7d0-1eb1-42d4-8891-6758fdbcb01f-kube-api-access-bmlgg\") pod \"migrator-59844c95c7-jqxjt\" (UID: \"5feff7d0-1eb1-42d4-8891-6758fdbcb01f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jqxjt" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.465556 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/502a480e-94d6-425b-923c-ef29c26c09a2-config\") pod \"service-ca-operator-777779d784-4p68r\" (UID: \"502a480e-94d6-425b-923c-ef29c26c09a2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4p68r" Jan 30 21:45:49 crc 
kubenswrapper[4869]: I0130 21:45:49.465595 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8922ebf0-c10f-4963-b517-e51ba2284e99-audit-dir\") pod \"apiserver-76f77b778f-4kznw\" (UID: \"8922ebf0-c10f-4963-b517-e51ba2284e99\") " pod="openshift-apiserver/apiserver-76f77b778f-4kznw"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.465629 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/1f418006-5265-449a-8e91-64144e311a6b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-cjzmv\" (UID: \"1f418006-5265-449a-8e91-64144e311a6b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cjzmv"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.465666 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86ggd\" (UniqueName: \"kubernetes.io/projected/901784d6-cd46-4181-a06f-f88c49faac0e-kube-api-access-86ggd\") pod \"cluster-samples-operator-665b6dd947-7qhjt\" (UID: \"901784d6-cd46-4181-a06f-f88c49faac0e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7qhjt"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.465714 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5894a987-705c-4bf2-a99f-ed766e5618db-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-6jqjl\" (UID: \"5894a987-705c-4bf2-a99f-ed766e5618db\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6jqjl"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.465755 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fgmnt\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.465791 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghpp5\" (UniqueName: \"kubernetes.io/projected/f460e85e-82b0-4ae3-b8f2-c9ec043b0dbc-kube-api-access-ghpp5\") pod \"packageserver-d55dfcdfc-9zpxh\" (UID: \"f460e85e-82b0-4ae3-b8f2-c9ec043b0dbc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9zpxh"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.465829 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b2bd278-ed0e-4e4e-90a6-9286bc29a664-client-ca\") pod \"controller-manager-879f6c89f-px2s8\" (UID: \"8b2bd278-ed0e-4e4e-90a6-9286bc29a664\") " pod="openshift-controller-manager/controller-manager-879f6c89f-px2s8"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.465867 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98f93152-3943-4bb2-ac4b-d2d79286e19d-config\") pod \"authentication-operator-69f744f599-565td\" (UID: \"98f93152-3943-4bb2-ac4b-d2d79286e19d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-565td"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.465936 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d91a880-3f38-492b-b797-6fc24c2da65e-serving-cert\") pod \"apiserver-7bbb656c7d-w587f\" (UID: \"6d91a880-3f38-492b-b797-6fc24c2da65e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w587f"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.465979 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6kn8\" (UniqueName: \"kubernetes.io/projected/5894a987-705c-4bf2-a99f-ed766e5618db-kube-api-access-k6kn8\") pod \"openshift-apiserver-operator-796bbdcf4f-6jqjl\" (UID: \"5894a987-705c-4bf2-a99f-ed766e5618db\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6jqjl"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.466014 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fgmnt\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.466047 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d151f3f9-526d-40a2-8021-13aa609295b1-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-hgz5t\" (UID: \"d151f3f9-526d-40a2-8021-13aa609295b1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hgz5t"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.466076 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwchr\" (UniqueName: \"kubernetes.io/projected/c31c9863-59b6-490a-95d1-53d4a9707117-kube-api-access-vwchr\") pod \"console-f9d7485db-mq7xx\" (UID: \"c31c9863-59b6-490a-95d1-53d4a9707117\") " pod="openshift-console/console-f9d7485db-mq7xx"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.466108 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lpqp\" (UniqueName: \"kubernetes.io/projected/49697d03-35f3-4fa3-9141-2bb8ae8eccab-kube-api-access-8lpqp\") pod \"downloads-7954f5f757-k4rff\" (UID: \"49697d03-35f3-4fa3-9141-2bb8ae8eccab\") " pod="openshift-console/downloads-7954f5f757-k4rff"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.466129 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fgmnt\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.466164 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8922ebf0-c10f-4963-b517-e51ba2284e99-image-import-ca\") pod \"apiserver-76f77b778f-4kznw\" (UID: \"8922ebf0-c10f-4963-b517-e51ba2284e99\") " pod="openshift-apiserver/apiserver-76f77b778f-4kznw"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.466188 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d91a880-3f38-492b-b797-6fc24c2da65e-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-w587f\" (UID: \"6d91a880-3f38-492b-b797-6fc24c2da65e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w587f"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.466224 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6d91a880-3f38-492b-b797-6fc24c2da65e-audit-policies\") pod \"apiserver-7bbb656c7d-w587f\" (UID: \"6d91a880-3f38-492b-b797-6fc24c2da65e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w587f"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.466252 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5d54f896-380b-4253-bcba-8576673c606e-srv-cert\") pod \"olm-operator-6b444d44fb-9f2tz\" (UID: \"5d54f896-380b-4253-bcba-8576673c606e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9f2tz"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.466281 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5d54f896-380b-4253-bcba-8576673c606e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-9f2tz\" (UID: \"5d54f896-380b-4253-bcba-8576673c606e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9f2tz"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.466307 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fgmnt\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.466345 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/901784d6-cd46-4181-a06f-f88c49faac0e-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-7qhjt\" (UID: \"901784d6-cd46-4181-a06f-f88c49faac0e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7qhjt"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.468545 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx4hs\" (UniqueName: \"kubernetes.io/projected/c1f0b262-4d72-49a2-aa45-918fbc89a9f2-kube-api-access-wx4hs\") pod \"machine-api-operator-5694c8668f-npnfg\" (UID: \"c1f0b262-4d72-49a2-aa45-918fbc89a9f2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-npnfg"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.468635 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8b78a73-2ae1-4eed-8110-943a6ae6fe04-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-v5kfz\" (UID: \"a8b78a73-2ae1-4eed-8110-943a6ae6fe04\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v5kfz"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.468745 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fgmnt\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.468779 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98f93152-3943-4bb2-ac4b-d2d79286e19d-serving-cert\") pod \"authentication-operator-69f744f599-565td\" (UID: \"98f93152-3943-4bb2-ac4b-d2d79286e19d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-565td"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.468809 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7584681-881d-465c-b4e8-121404518807-config\") pod \"console-operator-58897d9998-8brpl\" (UID: \"c7584681-881d-465c-b4e8-121404518807\") " pod="openshift-console-operator/console-operator-58897d9998-8brpl"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.468945 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dskh\" (UniqueName: \"kubernetes.io/projected/a8b78a73-2ae1-4eed-8110-943a6ae6fe04-kube-api-access-8dskh\") pod \"openshift-controller-manager-operator-756b6f6bc6-v5kfz\" (UID: \"a8b78a73-2ae1-4eed-8110-943a6ae6fe04\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v5kfz"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.468977 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8922ebf0-c10f-4963-b517-e51ba2284e99-serving-cert\") pod \"apiserver-76f77b778f-4kznw\" (UID: \"8922ebf0-c10f-4963-b517-e51ba2284e99\") " pod="openshift-apiserver/apiserver-76f77b778f-4kznw"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.469002 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b2bd278-ed0e-4e4e-90a6-9286bc29a664-serving-cert\") pod \"controller-manager-879f6c89f-px2s8\" (UID: \"8b2bd278-ed0e-4e4e-90a6-9286bc29a664\") " pod="openshift-controller-manager/controller-manager-879f6c89f-px2s8"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.469031 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wm5m\" (UniqueName: \"kubernetes.io/projected/6d91a880-3f38-492b-b797-6fc24c2da65e-kube-api-access-7wm5m\") pod \"apiserver-7bbb656c7d-w587f\" (UID: \"6d91a880-3f38-492b-b797-6fc24c2da65e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w587f"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.469057 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b2bd278-ed0e-4e4e-90a6-9286bc29a664-config\") pod \"controller-manager-879f6c89f-px2s8\" (UID: \"8b2bd278-ed0e-4e4e-90a6-9286bc29a664\") " pod="openshift-controller-manager/controller-manager-879f6c89f-px2s8"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.469079 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b2bd278-ed0e-4e4e-90a6-9286bc29a664-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-px2s8\" (UID: \"8b2bd278-ed0e-4e4e-90a6-9286bc29a664\") " pod="openshift-controller-manager/controller-manager-879f6c89f-px2s8"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.469102 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6d91a880-3f38-492b-b797-6fc24c2da65e-audit-dir\") pod \"apiserver-7bbb656c7d-w587f\" (UID: \"6d91a880-3f38-492b-b797-6fc24c2da65e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w587f"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.469680 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20c24fb1-6b9c-4688-bb3e-6bc97fce9856-serving-cert\") pod \"route-controller-manager-6576b87f9c-t4q8n\" (UID: \"20c24fb1-6b9c-4688-bb3e-6bc97fce9856\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4q8n"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.469750 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f1da8dd6-5d73-4fc5-9e46-40c77930b2da-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rkj8m\" (UID: \"f1da8dd6-5d73-4fc5-9e46-40c77930b2da\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkj8m"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.469787 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx5xq\" (UniqueName: \"kubernetes.io/projected/f1da8dd6-5d73-4fc5-9e46-40c77930b2da-kube-api-access-mx5xq\") pod \"cluster-image-registry-operator-dc59b4c8b-rkj8m\" (UID: \"f1da8dd6-5d73-4fc5-9e46-40c77930b2da\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkj8m"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.469994 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5894a987-705c-4bf2-a99f-ed766e5618db-config\") pod \"openshift-apiserver-operator-796bbdcf4f-6jqjl\" (UID: \"5894a987-705c-4bf2-a99f-ed766e5618db\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6jqjl"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.470459 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/502a480e-94d6-425b-923c-ef29c26c09a2-serving-cert\") pod \"service-ca-operator-777779d784-4p68r\" (UID: \"502a480e-94d6-425b-923c-ef29c26c09a2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4p68r"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.478081 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1f0b262-4d72-49a2-aa45-918fbc89a9f2-config\") pod \"machine-api-operator-5694c8668f-npnfg\" (UID: \"c1f0b262-4d72-49a2-aa45-918fbc89a9f2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-npnfg"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.478201 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/76a78a93-c264-4c12-b89a-b265e6731c7e-proxy-tls\") pod \"machine-config-operator-74547568cd-6tz69\" (UID: \"76a78a93-c264-4c12-b89a-b265e6731c7e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6tz69"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.478225 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c31c9863-59b6-490a-95d1-53d4a9707117-console-oauth-config\") pod \"console-f9d7485db-mq7xx\" (UID: \"c31c9863-59b6-490a-95d1-53d4a9707117\") " pod="openshift-console/console-f9d7485db-mq7xx"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.478279 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8922ebf0-c10f-4963-b517-e51ba2284e99-node-pullsecrets\") pod \"apiserver-76f77b778f-4kznw\" (UID: \"8922ebf0-c10f-4963-b517-e51ba2284e99\") " pod="openshift-apiserver/apiserver-76f77b778f-4kznw"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.478317 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1da8dd6-5d73-4fc5-9e46-40c77930b2da-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rkj8m\" (UID: \"f1da8dd6-5d73-4fc5-9e46-40c77930b2da\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkj8m"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.478349 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f460e85e-82b0-4ae3-b8f2-c9ec043b0dbc-webhook-cert\") pod \"packageserver-d55dfcdfc-9zpxh\" (UID: \"f460e85e-82b0-4ae3-b8f2-c9ec043b0dbc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9zpxh"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.478373 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98f93152-3943-4bb2-ac4b-d2d79286e19d-service-ca-bundle\") pod \"authentication-operator-69f744f599-565td\" (UID: \"98f93152-3943-4bb2-ac4b-d2d79286e19d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-565td"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.478387 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c31c9863-59b6-490a-95d1-53d4a9707117-trusted-ca-bundle\") pod \"console-f9d7485db-mq7xx\" (UID: \"c31c9863-59b6-490a-95d1-53d4a9707117\") " pod="openshift-console/console-f9d7485db-mq7xx"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.478449 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e7c790ef-ae52-4809-b6f2-088811793867-audit-policies\") pod \"oauth-openshift-558db77b4-fgmnt\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.478501 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e7c790ef-ae52-4809-b6f2-088811793867-audit-dir\") pod \"oauth-openshift-558db77b4-fgmnt\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.478534 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f460e85e-82b0-4ae3-b8f2-c9ec043b0dbc-apiservice-cert\") pod \"packageserver-d55dfcdfc-9zpxh\" (UID: \"f460e85e-82b0-4ae3-b8f2-c9ec043b0dbc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9zpxh"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.478620 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8922ebf0-c10f-4963-b517-e51ba2284e99-etcd-serving-ca\") pod \"apiserver-76f77b778f-4kznw\" (UID: \"8922ebf0-c10f-4963-b517-e51ba2284e99\") " pod="openshift-apiserver/apiserver-76f77b778f-4kznw"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.478703 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8922ebf0-c10f-4963-b517-e51ba2284e99-encryption-config\") pod \"apiserver-76f77b778f-4kznw\" (UID: \"8922ebf0-c10f-4963-b517-e51ba2284e99\") " pod="openshift-apiserver/apiserver-76f77b778f-4kznw"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.478732 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d8bca2a0-12fd-48ab-9507-ca0824a394cb-auth-proxy-config\") pod \"machine-approver-56656f9798-2wk5x\" (UID: \"d8bca2a0-12fd-48ab-9507-ca0824a394cb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2wk5x"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.478993 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6d91a880-3f38-492b-b797-6fc24c2da65e-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-w587f\" (UID: \"6d91a880-3f38-492b-b797-6fc24c2da65e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w587f"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.479034 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c31c9863-59b6-490a-95d1-53d4a9707117-oauth-serving-cert\") pod \"console-f9d7485db-mq7xx\" (UID: \"c31c9863-59b6-490a-95d1-53d4a9707117\") " pod="openshift-console/console-f9d7485db-mq7xx"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.479097 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jcv4\" (UniqueName: \"kubernetes.io/projected/400bd72e-e645-473e-b285-497d96f567ac-kube-api-access-6jcv4\") pod \"dns-operator-744455d44c-kgmf9\" (UID: \"400bd72e-e645-473e-b285-497d96f567ac\") " pod="openshift-dns-operator/dns-operator-744455d44c-kgmf9"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.479131 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c1f0b262-4d72-49a2-aa45-918fbc89a9f2-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-npnfg\" (UID: \"c1f0b262-4d72-49a2-aa45-918fbc89a9f2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-npnfg"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.479187 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8b78a73-2ae1-4eed-8110-943a6ae6fe04-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-v5kfz\" (UID: \"a8b78a73-2ae1-4eed-8110-943a6ae6fe04\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v5kfz"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.479269 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7584681-881d-465c-b4e8-121404518807-serving-cert\") pod \"console-operator-58897d9998-8brpl\" (UID: \"c7584681-881d-465c-b4e8-121404518807\") " pod="openshift-console-operator/console-operator-58897d9998-8brpl"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.479307 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4ftk\" (UniqueName: \"kubernetes.io/projected/20c24fb1-6b9c-4688-bb3e-6bc97fce9856-kube-api-access-m4ftk\") pod \"route-controller-manager-6576b87f9c-t4q8n\" (UID: \"20c24fb1-6b9c-4688-bb3e-6bc97fce9856\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4q8n"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.479381 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/76a78a93-c264-4c12-b89a-b265e6731c7e-images\") pod \"machine-config-operator-74547568cd-6tz69\" (UID: \"76a78a93-c264-4c12-b89a-b265e6731c7e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6tz69"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.479459 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c7584681-881d-465c-b4e8-121404518807-trusted-ca\") pod \"console-operator-58897d9998-8brpl\" (UID: \"c7584681-881d-465c-b4e8-121404518807\") " pod="openshift-console-operator/console-operator-58897d9998-8brpl"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.479508 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6d91a880-3f38-492b-b797-6fc24c2da65e-encryption-config\") pod \"apiserver-7bbb656c7d-w587f\" (UID: \"6d91a880-3f38-492b-b797-6fc24c2da65e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w587f"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.479536 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fgmnt\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.479556 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f460e85e-82b0-4ae3-b8f2-c9ec043b0dbc-tmpfs\") pod \"packageserver-d55dfcdfc-9zpxh\" (UID: \"f460e85e-82b0-4ae3-b8f2-c9ec043b0dbc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9zpxh"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.479613 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dphxl\" (UniqueName: \"kubernetes.io/projected/c7584681-881d-465c-b4e8-121404518807-kube-api-access-dphxl\") pod \"console-operator-58897d9998-8brpl\" (UID: \"c7584681-881d-465c-b4e8-121404518807\") " pod="openshift-console-operator/console-operator-58897d9998-8brpl"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.479634 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20c24fb1-6b9c-4688-bb3e-6bc97fce9856-config\") pod \"route-controller-manager-6576b87f9c-t4q8n\" (UID: \"20c24fb1-6b9c-4688-bb3e-6bc97fce9856\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4q8n"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.479662 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20c24fb1-6b9c-4688-bb3e-6bc97fce9856-client-ca\") pod \"route-controller-manager-6576b87f9c-t4q8n\" (UID: \"20c24fb1-6b9c-4688-bb3e-6bc97fce9856\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4q8n"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.479828 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d8bca2a0-12fd-48ab-9507-ca0824a394cb-auth-proxy-config\") pod \"machine-approver-56656f9798-2wk5x\" (UID: \"d8bca2a0-12fd-48ab-9507-ca0824a394cb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2wk5x"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.479923 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8922ebf0-c10f-4963-b517-e51ba2284e99-node-pullsecrets\") pod \"apiserver-76f77b778f-4kznw\" (UID: \"8922ebf0-c10f-4963-b517-e51ba2284e99\") " pod="openshift-apiserver/apiserver-76f77b778f-4kznw"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.480315 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8922ebf0-c10f-4963-b517-e51ba2284e99-etcd-serving-ca\") pod \"apiserver-76f77b778f-4kznw\" (UID: \"8922ebf0-c10f-4963-b517-e51ba2284e99\") " pod="openshift-apiserver/apiserver-76f77b778f-4kznw"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.480774 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7584681-881d-465c-b4e8-121404518807-config\") pod \"console-operator-58897d9998-8brpl\" (UID: \"c7584681-881d-465c-b4e8-121404518807\") " pod="openshift-console-operator/console-operator-58897d9998-8brpl"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.481105 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/1f418006-5265-449a-8e91-64144e311a6b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-cjzmv\" (UID: \"1f418006-5265-449a-8e91-64144e311a6b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cjzmv"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.481725 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1da8dd6-5d73-4fc5-9e46-40c77930b2da-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rkj8m\" (UID: \"f1da8dd6-5d73-4fc5-9e46-40c77930b2da\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkj8m"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.483422 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98f93152-3943-4bb2-ac4b-d2d79286e19d-serving-cert\") pod \"authentication-operator-69f744f599-565td\" (UID: \"98f93152-3943-4bb2-ac4b-d2d79286e19d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-565td"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.483840 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.483982 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8bca2a0-12fd-48ab-9507-ca0824a394cb-config\") pod \"machine-approver-56656f9798-2wk5x\" (UID: \"d8bca2a0-12fd-48ab-9507-ca0824a394cb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2wk5x"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.484086 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98f93152-3943-4bb2-ac4b-d2d79286e19d-config\") pod \"authentication-operator-69f744f599-565td\" (UID: \"98f93152-3943-4bb2-ac4b-d2d79286e19d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-565td"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.488086 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c31c9863-59b6-490a-95d1-53d4a9707117-trusted-ca-bundle\") pod \"console-f9d7485db-mq7xx\" (UID: \"c31c9863-59b6-490a-95d1-53d4a9707117\") " pod="openshift-console/console-f9d7485db-mq7xx"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.488520 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c7584681-881d-465c-b4e8-121404518807-trusted-ca\") pod \"console-operator-58897d9998-8brpl\" (UID: \"c7584681-881d-465c-b4e8-121404518807\") " pod="openshift-console-operator/console-operator-58897d9998-8brpl"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.488915 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b2bd278-ed0e-4e4e-90a6-9286bc29a664-client-ca\") pod \"controller-manager-879f6c89f-px2s8\" (UID: \"8b2bd278-ed0e-4e4e-90a6-9286bc29a664\") " pod="openshift-controller-manager/controller-manager-879f6c89f-px2s8"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.489365 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d91a880-3f38-492b-b797-6fc24c2da65e-serving-cert\") pod \"apiserver-7bbb656c7d-w587f\" (UID: \"6d91a880-3f38-492b-b797-6fc24c2da65e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w587f"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.490135 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5894a987-705c-4bf2-a99f-ed766e5618db-config\") pod \"openshift-apiserver-operator-796bbdcf4f-6jqjl\" (UID: \"5894a987-705c-4bf2-a99f-ed766e5618db\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6jqjl"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.490456 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.492179 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1f0b262-4d72-49a2-aa45-918fbc89a9f2-config\") pod \"machine-api-operator-5694c8668f-npnfg\" (UID: \"c1f0b262-4d72-49a2-aa45-918fbc89a9f2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-npnfg"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.492452 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8922ebf0-c10f-4963-b517-e51ba2284e99-encryption-config\") pod \"apiserver-76f77b778f-4kznw\" (UID: \"8922ebf0-c10f-4963-b517-e51ba2284e99\") " pod="openshift-apiserver/apiserver-76f77b778f-4kznw"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.492607 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98f93152-3943-4bb2-ac4b-d2d79286e19d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-565td\" (UID: \"98f93152-3943-4bb2-ac4b-d2d79286e19d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-565td"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.492733 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8922ebf0-c10f-4963-b517-e51ba2284e99-etcd-client\") pod \"apiserver-76f77b778f-4kznw\" (UID: \"8922ebf0-c10f-4963-b517-e51ba2284e99\") " pod="openshift-apiserver/apiserver-76f77b778f-4kznw"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.493027 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.493264 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8922ebf0-c10f-4963-b517-e51ba2284e99-trusted-ca-bundle\") pod \"apiserver-76f77b778f-4kznw\" (UID: \"8922ebf0-c10f-4963-b517-e51ba2284e99\") " pod="openshift-apiserver/apiserver-76f77b778f-4kznw"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.493379 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8922ebf0-c10f-4963-b517-e51ba2284e99-config\") pod \"apiserver-76f77b778f-4kznw\" (UID: \"8922ebf0-c10f-4963-b517-e51ba2284e99\") " pod="openshift-apiserver/apiserver-76f77b778f-4kznw"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.493430 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6tsq\" (UniqueName: \"kubernetes.io/projected/5d54f896-380b-4253-bcba-8576673c606e-kube-api-access-l6tsq\") pod \"olm-operator-6b444d44fb-9f2tz\" (UID: \"5d54f896-380b-4253-bcba-8576673c606e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9f2tz"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.493456 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fgmnt\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.493481 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ftlt\" (UniqueName: \"kubernetes.io/projected/502a480e-94d6-425b-923c-ef29c26c09a2-kube-api-access-4ftlt\") pod \"service-ca-operator-777779d784-4p68r\" (UID: \"502a480e-94d6-425b-923c-ef29c26c09a2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4p68r"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.493576 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8922ebf0-c10f-4963-b517-e51ba2284e99-audit\") pod \"apiserver-76f77b778f-4kznw\" (UID: \"8922ebf0-c10f-4963-b517-e51ba2284e99\") " pod="openshift-apiserver/apiserver-76f77b778f-4kznw"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.493623 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c31c9863-59b6-490a-95d1-53d4a9707117-oauth-serving-cert\") pod \"console-f9d7485db-mq7xx\" (UID: \"c31c9863-59b6-490a-95d1-53d4a9707117\") " pod="openshift-console/console-f9d7485db-mq7xx"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.495019 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d91a880-3f38-492b-b797-6fc24c2da65e-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-w587f\" (UID: \"6d91a880-3f38-492b-b797-6fc24c2da65e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w587f"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.495408 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6d91a880-3f38-492b-b797-6fc24c2da65e-etcd-client\") pod \"apiserver-7bbb656c7d-w587f\" (UID: \"6d91a880-3f38-492b-b797-6fc24c2da65e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w587f"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.495599 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7584681-881d-465c-b4e8-121404518807-serving-cert\") pod \"console-operator-58897d9998-8brpl\" (UID: \"c7584681-881d-465c-b4e8-121404518807\") " pod="openshift-console-operator/console-operator-58897d9998-8brpl"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.496171 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6d91a880-3f38-492b-b797-6fc24c2da65e-audit-policies\") pod \"apiserver-7bbb656c7d-w587f\" (UID: \"6d91a880-3f38-492b-b797-6fc24c2da65e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w587f"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.496286 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7qhjt"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.496316 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8brpl"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.496327 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6jqjl"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.496339 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-mq7xx"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.496350 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496825-lf7bw"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.496425 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8922ebf0-c10f-4963-b517-e51ba2284e99-audit-dir\") pod \"apiserver-76f77b778f-4kznw\" (UID: \"8922ebf0-c10f-4963-b517-e51ba2284e99\") " pod="openshift-apiserver/apiserver-76f77b778f-4kznw"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.496710 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8922ebf0-c10f-4963-b517-e51ba2284e99-config\") pod \"apiserver-76f77b778f-4kznw\" (UID: \"8922ebf0-c10f-4963-b517-e51ba2284e99\") " pod="openshift-apiserver/apiserver-76f77b778f-4kznw"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.496757 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-jqxjt"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.496830 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-lf7bw"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.497061 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-2zzvw"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.497562 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6d91a880-3f38-492b-b797-6fc24c2da65e-audit-dir\") pod \"apiserver-7bbb656c7d-w587f\" (UID: \"6d91a880-3f38-492b-b797-6fc24c2da65e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w587f"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.498064 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c1f0b262-4d72-49a2-aa45-918fbc89a9f2-images\") pod \"machine-api-operator-5694c8668f-npnfg\" (UID: \"c1f0b262-4d72-49a2-aa45-918fbc89a9f2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-npnfg"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.498416 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c1f0b262-4d72-49a2-aa45-918fbc89a9f2-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-npnfg\" (UID: \"c1f0b262-4d72-49a2-aa45-918fbc89a9f2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-npnfg"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.498461 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8922ebf0-c10f-4963-b517-e51ba2284e99-image-import-ca\") pod \"apiserver-76f77b778f-4kznw\" (UID: \"8922ebf0-c10f-4963-b517-e51ba2284e99\") " pod="openshift-apiserver/apiserver-76f77b778f-4kznw"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.499214 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c31c9863-59b6-490a-95d1-53d4a9707117-console-config\") pod \"console-f9d7485db-mq7xx\" (UID: \"c31c9863-59b6-490a-95d1-53d4a9707117\") " pod="openshift-console/console-f9d7485db-mq7xx"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.499238 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c31c9863-59b6-490a-95d1-53d4a9707117-service-ca\") pod \"console-f9d7485db-mq7xx\" (UID: \"c31c9863-59b6-490a-95d1-53d4a9707117\") " pod="openshift-console/console-f9d7485db-mq7xx"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.499437 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b2bd278-ed0e-4e4e-90a6-9286bc29a664-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-px2s8\" (UID: \"8b2bd278-ed0e-4e4e-90a6-9286bc29a664\") " pod="openshift-controller-manager/controller-manager-879f6c89f-px2s8"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.500262 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-cjzmv"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.500828 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b2bd278-ed0e-4e4e-90a6-9286bc29a664-config\") pod \"controller-manager-879f6c89f-px2s8\" (UID: \"8b2bd278-ed0e-4e4e-90a6-9286bc29a664\") " pod="openshift-controller-manager/controller-manager-879f6c89f-px2s8"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.501108 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c31c9863-59b6-490a-95d1-53d4a9707117-console-serving-cert\") pod \"console-f9d7485db-mq7xx\" (UID: \"c31c9863-59b6-490a-95d1-53d4a9707117\") " pod="openshift-console/console-f9d7485db-mq7xx"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.501363 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d8bca2a0-12fd-48ab-9507-ca0824a394cb-machine-approver-tls\") pod \"machine-approver-56656f9798-2wk5x\" (UID: \"d8bca2a0-12fd-48ab-9507-ca0824a394cb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2wk5x"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.501666 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-kgmf9"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.501836 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.502133 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5894a987-705c-4bf2-a99f-ed766e5618db-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-6jqjl\" (UID: \"5894a987-705c-4bf2-a99f-ed766e5618db\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6jqjl"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.502509 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f418006-5265-449a-8e91-64144e311a6b-serving-cert\") pod \"openshift-config-operator-7777fb866f-cjzmv\" (UID: \"1f418006-5265-449a-8e91-64144e311a6b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cjzmv"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.503289 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6d91a880-3f38-492b-b797-6fc24c2da65e-encryption-config\") pod \"apiserver-7bbb656c7d-w587f\" (UID: \"6d91a880-3f38-492b-b797-6fc24c2da65e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w587f"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.503770 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/400bd72e-e645-473e-b285-497d96f567ac-metrics-tls\") pod \"dns-operator-744455d44c-kgmf9\" (UID: \"400bd72e-e645-473e-b285-497d96f567ac\") " pod="openshift-dns-operator/dns-operator-744455d44c-kgmf9"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.504092 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/f1da8dd6-5d73-4fc5-9e46-40c77930b2da-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rkj8m\" (UID: \"f1da8dd6-5d73-4fc5-9e46-40c77930b2da\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkj8m"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.504683 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b2bd278-ed0e-4e4e-90a6-9286bc29a664-serving-cert\") pod \"controller-manager-879f6c89f-px2s8\" (UID: \"8b2bd278-ed0e-4e4e-90a6-9286bc29a664\") " pod="openshift-controller-manager/controller-manager-879f6c89f-px2s8"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.505848 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-k4rff"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.507846 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v5kfz"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.509247 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-mfmhf"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.510188 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-mfmhf"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.510987 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8922ebf0-c10f-4963-b517-e51ba2284e99-serving-cert\") pod \"apiserver-76f77b778f-4kznw\" (UID: \"8922ebf0-c10f-4963-b517-e51ba2284e99\") " pod="openshift-apiserver/apiserver-76f77b778f-4kznw"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.511882 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-npnfg"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.513599 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.515662 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/901784d6-cd46-4181-a06f-f88c49faac0e-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-7qhjt\" (UID: \"901784d6-cd46-4181-a06f-f88c49faac0e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7qhjt"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.520474 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-4p68r"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.522664 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q4k7p"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.525851 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4t6ks"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.528535 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-m5j8f"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.528583 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-w587f"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.529792 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-bgcv6"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.530960 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-v5sqv"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.531816 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkj8m"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.532762 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.532906 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9f2tz"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.534219 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-hgz5t"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.535359 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-nrn8v"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.539452 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-mfmhf"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.539486 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4q8n"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.539498 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fgmnt"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.539648 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-nrn8v"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.542108 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5dlg7"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.542798 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kkgzt"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.544126 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-6tz69"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.546174 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9zpxh"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.546221 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4pn7q"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.547554 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6bxlw"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.548501 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m95d6"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.550143 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-j58b4"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.551876 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.552146 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-b86ql"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.553453 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-nrn8v"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.554986 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496825-lf7bw"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.557928 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-brwlx"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.559551 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-txphr"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.561186 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-txphr"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.561390 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gs5md"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.563879 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-gs5md"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.574507 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gs5md"]
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.575020 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.591742 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.594289 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8b78a73-2ae1-4eed-8110-943a6ae6fe04-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-v5kfz\" (UID: \"a8b78a73-2ae1-4eed-8110-943a6ae6fe04\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v5kfz"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.594338 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/76a78a93-c264-4c12-b89a-b265e6731c7e-images\") pod \"machine-config-operator-74547568cd-6tz69\" (UID: \"76a78a93-c264-4c12-b89a-b265e6731c7e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6tz69"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.594369 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4ftk\" (UniqueName: \"kubernetes.io/projected/20c24fb1-6b9c-4688-bb3e-6bc97fce9856-kube-api-access-m4ftk\") pod \"route-controller-manager-6576b87f9c-t4q8n\" (UID: \"20c24fb1-6b9c-4688-bb3e-6bc97fce9856\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4q8n"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.594392 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fgmnt\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.594423 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f460e85e-82b0-4ae3-b8f2-c9ec043b0dbc-tmpfs\") pod \"packageserver-d55dfcdfc-9zpxh\" (UID: \"f460e85e-82b0-4ae3-b8f2-c9ec043b0dbc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9zpxh"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.594452 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20c24fb1-6b9c-4688-bb3e-6bc97fce9856-config\") pod \"route-controller-manager-6576b87f9c-t4q8n\" (UID: \"20c24fb1-6b9c-4688-bb3e-6bc97fce9856\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4q8n"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.594471 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20c24fb1-6b9c-4688-bb3e-6bc97fce9856-client-ca\") pod \"route-controller-manager-6576b87f9c-t4q8n\" (UID: \"20c24fb1-6b9c-4688-bb3e-6bc97fce9856\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4q8n"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.594503 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fgmnt\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.594529 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ftlt\" (UniqueName: \"kubernetes.io/projected/502a480e-94d6-425b-923c-ef29c26c09a2-kube-api-access-4ftlt\") pod \"service-ca-operator-777779d784-4p68r\" (UID: \"502a480e-94d6-425b-923c-ef29c26c09a2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4p68r"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.594554 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6tsq\" (UniqueName: \"kubernetes.io/projected/5d54f896-380b-4253-bcba-8576673c606e-kube-api-access-l6tsq\") pod \"olm-operator-6b444d44fb-9f2tz\" (UID: \"5d54f896-380b-4253-bcba-8576673c606e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9f2tz"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.594577 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fgmnt\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.594613 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fgmnt\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.594641 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mktrn\" (UniqueName: \"kubernetes.io/projected/d151f3f9-526d-40a2-8021-13aa609295b1-kube-api-access-mktrn\") pod \"multus-admission-controller-857f4d67dd-hgz5t\" (UID: \"d151f3f9-526d-40a2-8021-13aa609295b1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hgz5t"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.594732 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm6zq\" (UniqueName: \"kubernetes.io/projected/e7c790ef-ae52-4809-b6f2-088811793867-kube-api-access-wm6zq\") pod \"oauth-openshift-558db77b4-fgmnt\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.594765 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fgmnt\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.594792 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/76a78a93-c264-4c12-b89a-b265e6731c7e-auth-proxy-config\") pod \"machine-config-operator-74547568cd-6tz69\" (UID: \"76a78a93-c264-4c12-b89a-b265e6731c7e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6tz69"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.594818 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fgmnt\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.594838 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rr856\" (UniqueName: \"kubernetes.io/projected/76a78a93-c264-4c12-b89a-b265e6731c7e-kube-api-access-rr856\") pod \"machine-config-operator-74547568cd-6tz69\" (UID: \"76a78a93-c264-4c12-b89a-b265e6731c7e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6tz69"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.594857 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmlgg\" (UniqueName: \"kubernetes.io/projected/5feff7d0-1eb1-42d4-8891-6758fdbcb01f-kube-api-access-bmlgg\") pod \"migrator-59844c95c7-jqxjt\" (UID: \"5feff7d0-1eb1-42d4-8891-6758fdbcb01f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jqxjt"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.594876 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/502a480e-94d6-425b-923c-ef29c26c09a2-config\") pod \"service-ca-operator-777779d784-4p68r\" (UID: \"502a480e-94d6-425b-923c-ef29c26c09a2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4p68r"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.594921 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fgmnt\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.594947 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghpp5\" (UniqueName: \"kubernetes.io/projected/f460e85e-82b0-4ae3-b8f2-c9ec043b0dbc-kube-api-access-ghpp5\") pod \"packageserver-d55dfcdfc-9zpxh\" (UID: \"f460e85e-82b0-4ae3-b8f2-c9ec043b0dbc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9zpxh"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.594974 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fgmnt\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.594991 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d151f3f9-526d-40a2-8021-13aa609295b1-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-hgz5t\" (UID: \"d151f3f9-526d-40a2-8021-13aa609295b1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hgz5t"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.595009 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fgmnt\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.595039 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fgmnt\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.595057 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5d54f896-380b-4253-bcba-8576673c606e-srv-cert\") pod \"olm-operator-6b444d44fb-9f2tz\" (UID: \"5d54f896-380b-4253-bcba-8576673c606e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9f2tz"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.595076 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5d54f896-380b-4253-bcba-8576673c606e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-9f2tz\" (UID: \"5d54f896-380b-4253-bcba-8576673c606e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9f2tz"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.595102 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8b78a73-2ae1-4eed-8110-943a6ae6fe04-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-v5kfz\" (UID: \"a8b78a73-2ae1-4eed-8110-943a6ae6fe04\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v5kfz"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.595118 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fgmnt\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt"
Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.595141 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dskh\" (UniqueName: \"kubernetes.io/projected/a8b78a73-2ae1-4eed-8110-943a6ae6fe04-kube-api-access-8dskh\") pod \"openshift-controller-manager-operator-756b6f6bc6-v5kfz\" (UID: \"a8b78a73-2ae1-4eed-8110-943a6ae6fe04\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v5kfz" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.595165 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20c24fb1-6b9c-4688-bb3e-6bc97fce9856-serving-cert\") pod \"route-controller-manager-6576b87f9c-t4q8n\" (UID: \"20c24fb1-6b9c-4688-bb3e-6bc97fce9856\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4q8n" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.595188 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/502a480e-94d6-425b-923c-ef29c26c09a2-serving-cert\") pod \"service-ca-operator-777779d784-4p68r\" (UID: \"502a480e-94d6-425b-923c-ef29c26c09a2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4p68r" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.595226 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/76a78a93-c264-4c12-b89a-b265e6731c7e-proxy-tls\") pod \"machine-config-operator-74547568cd-6tz69\" (UID: \"76a78a93-c264-4c12-b89a-b265e6731c7e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6tz69" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.595251 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f460e85e-82b0-4ae3-b8f2-c9ec043b0dbc-webhook-cert\") pod \"packageserver-d55dfcdfc-9zpxh\" (UID: \"f460e85e-82b0-4ae3-b8f2-c9ec043b0dbc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9zpxh" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.595280 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e7c790ef-ae52-4809-b6f2-088811793867-audit-dir\") pod \"oauth-openshift-558db77b4-fgmnt\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.595782 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f460e85e-82b0-4ae3-b8f2-c9ec043b0dbc-apiservice-cert\") pod \"packageserver-d55dfcdfc-9zpxh\" (UID: \"f460e85e-82b0-4ae3-b8f2-c9ec043b0dbc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9zpxh" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.595815 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e7c790ef-ae52-4809-b6f2-088811793867-audit-policies\") pod \"oauth-openshift-558db77b4-fgmnt\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.596071 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/f460e85e-82b0-4ae3-b8f2-c9ec043b0dbc-tmpfs\") pod \"packageserver-d55dfcdfc-9zpxh\" (UID: \"f460e85e-82b0-4ae3-b8f2-c9ec043b0dbc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9zpxh" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.596128 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fgmnt\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.596518 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e7c790ef-ae52-4809-b6f2-088811793867-audit-policies\") pod \"oauth-openshift-558db77b4-fgmnt\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.596843 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/76a78a93-c264-4c12-b89a-b265e6731c7e-auth-proxy-config\") pod \"machine-config-operator-74547568cd-6tz69\" (UID: \"76a78a93-c264-4c12-b89a-b265e6731c7e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6tz69" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.596886 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fgmnt\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.596925 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8b78a73-2ae1-4eed-8110-943a6ae6fe04-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-v5kfz\" (UID: \"a8b78a73-2ae1-4eed-8110-943a6ae6fe04\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v5kfz" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.597032 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fgmnt\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.597240 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e7c790ef-ae52-4809-b6f2-088811793867-audit-dir\") pod \"oauth-openshift-558db77b4-fgmnt\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.597710 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fgmnt\" (UID: 
\"e7c790ef-ae52-4809-b6f2-088811793867\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.599019 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fgmnt\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.600168 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fgmnt\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.600185 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fgmnt\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.600449 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fgmnt\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.600827 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8b78a73-2ae1-4eed-8110-943a6ae6fe04-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-v5kfz\" (UID: \"a8b78a73-2ae1-4eed-8110-943a6ae6fe04\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v5kfz" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.600860 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fgmnt\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.601759 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fgmnt\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.601938 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fgmnt\" (UID: 
\"e7c790ef-ae52-4809-b6f2-088811793867\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.611766 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.631889 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.644783 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5d54f896-380b-4253-bcba-8576673c606e-srv-cert\") pod \"olm-operator-6b444d44fb-9f2tz\" (UID: \"5d54f896-380b-4253-bcba-8576673c606e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9f2tz" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.652869 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.671615 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.692945 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.711908 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.731404 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.751708 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.771774 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.794392 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.812341 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.821325 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/502a480e-94d6-425b-923c-ef29c26c09a2-serving-cert\") pod \"service-ca-operator-777779d784-4p68r\" (UID: \"502a480e-94d6-425b-923c-ef29c26c09a2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4p68r" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.833039 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.852226 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.857620 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/502a480e-94d6-425b-923c-ef29c26c09a2-config\") pod \"service-ca-operator-777779d784-4p68r\" (UID: \"502a480e-94d6-425b-923c-ef29c26c09a2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4p68r" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.872302 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.892444 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.911988 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.932521 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.951802 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.972737 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.982972 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/76a78a93-c264-4c12-b89a-b265e6731c7e-proxy-tls\") pod \"machine-config-operator-74547568cd-6tz69\" (UID: \"76a78a93-c264-4c12-b89a-b265e6731c7e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6tz69" Jan 30 21:45:49 crc kubenswrapper[4869]: I0130 21:45:49.993710 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.011939 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.032493 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.051820 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.072950 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.092777 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.114120 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.132136 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.152507 4869 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"serving-cert" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.161865 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20c24fb1-6b9c-4688-bb3e-6bc97fce9856-serving-cert\") pod \"route-controller-manager-6576b87f9c-t4q8n\" (UID: \"20c24fb1-6b9c-4688-bb3e-6bc97fce9856\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4q8n" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.172627 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.176523 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20c24fb1-6b9c-4688-bb3e-6bc97fce9856-config\") pod \"route-controller-manager-6576b87f9c-t4q8n\" (UID: \"20c24fb1-6b9c-4688-bb3e-6bc97fce9856\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4q8n" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.192119 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.211564 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.232361 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.241919 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f460e85e-82b0-4ae3-b8f2-c9ec043b0dbc-webhook-cert\") pod \"packageserver-d55dfcdfc-9zpxh\" (UID: \"f460e85e-82b0-4ae3-b8f2-c9ec043b0dbc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9zpxh" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.242082 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f460e85e-82b0-4ae3-b8f2-c9ec043b0dbc-apiservice-cert\") pod \"packageserver-d55dfcdfc-9zpxh\" (UID: \"f460e85e-82b0-4ae3-b8f2-c9ec043b0dbc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9zpxh" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.254195 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.257169 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20c24fb1-6b9c-4688-bb3e-6bc97fce9856-client-ca\") pod \"route-controller-manager-6576b87f9c-t4q8n\" (UID: \"20c24fb1-6b9c-4688-bb3e-6bc97fce9856\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4q8n" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.271882 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.292956 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.312131 4869 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.315675 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/76a78a93-c264-4c12-b89a-b265e6731c7e-images\") pod \"machine-config-operator-74547568cd-6tz69\" (UID: \"76a78a93-c264-4c12-b89a-b265e6731c7e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6tz69" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.332251 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.353636 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.358969 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5d54f896-380b-4253-bcba-8576673c606e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-9f2tz\" (UID: \"5d54f896-380b-4253-bcba-8576673c606e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9f2tz" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.372652 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.380954 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d151f3f9-526d-40a2-8021-13aa609295b1-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-hgz5t\" (UID: \"d151f3f9-526d-40a2-8021-13aa609295b1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hgz5t" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.412107 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.430193 4869 request.go:700] Waited for 1.003403509s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-dockercfg-5nsgg&limit=500&resourceVersion=0 Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.432215 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.452340 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.477230 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.491417 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.512440 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.551881 4869 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.572400 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.591457 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.614307 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.632433 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.652310 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.672511 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.692511 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.711994 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.733198 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.753136 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.773050 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.792223 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.811536 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.832521 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.852177 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.872198 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.892728 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.929206 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-wgdbf\" (UniqueName: \"kubernetes.io/projected/8922ebf0-c10f-4963-b517-e51ba2284e99-kube-api-access-wgdbf\") pod \"apiserver-76f77b778f-4kznw\" (UID: \"8922ebf0-c10f-4963-b517-e51ba2284e99\") " pod="openshift-apiserver/apiserver-76f77b778f-4kznw" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.954503 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2h47\" (UniqueName: \"kubernetes.io/projected/1f418006-5265-449a-8e91-64144e311a6b-kube-api-access-q2h47\") pod \"openshift-config-operator-7777fb866f-cjzmv\" (UID: \"1f418006-5265-449a-8e91-64144e311a6b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cjzmv" Jan 30 21:45:50 crc kubenswrapper[4869]: I0130 21:45:50.967831 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx4hs\" (UniqueName: \"kubernetes.io/projected/c1f0b262-4d72-49a2-aa45-918fbc89a9f2-kube-api-access-wx4hs\") pod \"machine-api-operator-5694c8668f-npnfg\" (UID: \"c1f0b262-4d72-49a2-aa45-918fbc89a9f2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-npnfg" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.011362 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86ggd\" (UniqueName: \"kubernetes.io/projected/901784d6-cd46-4181-a06f-f88c49faac0e-kube-api-access-86ggd\") pod \"cluster-samples-operator-665b6dd947-7qhjt\" (UID: \"901784d6-cd46-4181-a06f-f88c49faac0e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7qhjt" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.013085 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7qhjt" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.031055 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6kn8\" (UniqueName: \"kubernetes.io/projected/5894a987-705c-4bf2-a99f-ed766e5618db-kube-api-access-k6kn8\") pod \"openshift-apiserver-operator-796bbdcf4f-6jqjl\" (UID: \"5894a987-705c-4bf2-a99f-ed766e5618db\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6jqjl" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.039723 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6jqjl" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.048646 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7tlh\" (UniqueName: \"kubernetes.io/projected/98f93152-3943-4bb2-ac4b-d2d79286e19d-kube-api-access-c7tlh\") pod \"authentication-operator-69f744f599-565td\" (UID: \"98f93152-3943-4bb2-ac4b-d2d79286e19d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-565td" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.065856 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cjzmv" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.076162 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx5xq\" (UniqueName: \"kubernetes.io/projected/f1da8dd6-5d73-4fc5-9e46-40c77930b2da-kube-api-access-mx5xq\") pod \"cluster-image-registry-operator-dc59b4c8b-rkj8m\" (UID: \"f1da8dd6-5d73-4fc5-9e46-40c77930b2da\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkj8m" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.110770 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dphxl\" (UniqueName: \"kubernetes.io/projected/c7584681-881d-465c-b4e8-121404518807-kube-api-access-dphxl\") pod \"console-operator-58897d9998-8brpl\" (UID: \"c7584681-881d-465c-b4e8-121404518807\") " pod="openshift-console-operator/console-operator-58897d9998-8brpl" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.111499 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwchr\" (UniqueName: \"kubernetes.io/projected/c31c9863-59b6-490a-95d1-53d4a9707117-kube-api-access-vwchr\") pod \"console-f9d7485db-mq7xx\" (UID: \"c31c9863-59b6-490a-95d1-53d4a9707117\") " pod="openshift-console/console-f9d7485db-mq7xx" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.115859 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-565td" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.132534 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pl5qw\" (UniqueName: \"kubernetes.io/projected/d8bca2a0-12fd-48ab-9507-ca0824a394cb-kube-api-access-pl5qw\") pod \"machine-approver-56656f9798-2wk5x\" (UID: \"d8bca2a0-12fd-48ab-9507-ca0824a394cb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2wk5x" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.134652 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-4kznw" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.149300 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-npnfg" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.153454 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.155421 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f1da8dd6-5d73-4fc5-9e46-40c77930b2da-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rkj8m\" (UID: \"f1da8dd6-5d73-4fc5-9e46-40c77930b2da\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkj8m" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.159800 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2wk5x" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.173337 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.181513 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-mq7xx" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.193783 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.213543 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.234829 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.235938 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7qhjt"] Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.252307 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.267492 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-8brpl" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.271852 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.285953 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6jqjl"] Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.292566 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.312386 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.355159 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lpqp\" (UniqueName: \"kubernetes.io/projected/49697d03-35f3-4fa3-9141-2bb8ae8eccab-kube-api-access-8lpqp\") pod \"downloads-7954f5f757-k4rff\" (UID: \"49697d03-35f3-4fa3-9141-2bb8ae8eccab\") " pod="openshift-console/downloads-7954f5f757-k4rff" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.356597 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkj8m" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.370399 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jcv4\" (UniqueName: \"kubernetes.io/projected/400bd72e-e645-473e-b285-497d96f567ac-kube-api-access-6jcv4\") pod \"dns-operator-744455d44c-kgmf9\" (UID: \"400bd72e-e645-473e-b285-497d96f567ac\") " pod="openshift-dns-operator/dns-operator-744455d44c-kgmf9" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.385463 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-cjzmv"] Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.390844 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wm5m\" (UniqueName: \"kubernetes.io/projected/6d91a880-3f38-492b-b797-6fc24c2da65e-kube-api-access-7wm5m\") pod \"apiserver-7bbb656c7d-w587f\" (UID: \"6d91a880-3f38-492b-b797-6fc24c2da65e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w587f" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.392263 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.413466 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.424875 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-npnfg"] Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.427729 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-4kznw"] Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.430792 4869 request.go:700] Waited for 1.920165409s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Dcanary-serving-cert&limit=500&resourceVersion=0 Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.434150 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.453281 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 30 21:45:51 crc kubenswrapper[4869]: W0130 21:45:51.470239 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8922ebf0_c10f_4963_b517_e51ba2284e99.slice/crio-ed6ca68cc340c149925c8435f59d9e0879bbea54a1795b85c52e0a612b853c7c WatchSource:0}: Error finding container ed6ca68cc340c149925c8435f59d9e0879bbea54a1795b85c52e0a612b853c7c: Status 404 returned error can't find the container with id ed6ca68cc340c149925c8435f59d9e0879bbea54a1795b85c52e0a612b853c7c Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.472136 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.493475 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.511068 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-authentication-operator/authentication-operator-69f744f599-565td"] Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.512559 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.531636 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.551812 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.571819 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.593227 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.596996 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-mq7xx"] Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.612856 4869 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.624814 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8brpl"] Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.630500 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-kgmf9" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.632168 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.634714 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cjzmv" event={"ID":"1f418006-5265-449a-8e91-64144e311a6b","Type":"ContainerStarted","Data":"0f1c9547aadf26c10f1b996757f2d7a0751fe307144685037f16d3d9801cf2de"} Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.634752 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cjzmv" event={"ID":"1f418006-5265-449a-8e91-64144e311a6b","Type":"ContainerStarted","Data":"92cf0afe6c5040ba541f9f45e8d6dd4fbb529caaffd28ff4cf3c14e1cef5bad2"} Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.638140 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-npnfg" event={"ID":"c1f0b262-4d72-49a2-aa45-918fbc89a9f2","Type":"ContainerStarted","Data":"90b4f34b3c7f2447fe7978ab9487e9996bcac672052c6910d54cbfe023fe3eff"} Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.641683 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-mq7xx" event={"ID":"c31c9863-59b6-490a-95d1-53d4a9707117","Type":"ContainerStarted","Data":"e36593b9b64365c76dac2e06ab2f691958c9c035d0d3f24b67f98650fe7097df"} Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.645578 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-k4rff" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.646012 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4kznw" event={"ID":"8922ebf0-c10f-4963-b517-e51ba2284e99","Type":"ContainerStarted","Data":"ed6ca68cc340c149925c8435f59d9e0879bbea54a1795b85c52e0a612b853c7c"} Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.656527 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6jqjl" event={"ID":"5894a987-705c-4bf2-a99f-ed766e5618db","Type":"ContainerStarted","Data":"1ad1928fa212a69a51cc04da7a6c41d804994b35dce25b6ff9227be99f3270f5"} Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.656591 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6jqjl" event={"ID":"5894a987-705c-4bf2-a99f-ed766e5618db","Type":"ContainerStarted","Data":"514e1dfdac6c1d95615612169dba960f8f6793ce3060279d3e5ff1aca5a4228f"} Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.659661 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7qhjt" event={"ID":"901784d6-cd46-4181-a06f-f88c49faac0e","Type":"ContainerStarted","Data":"4037d2126c70106aecb85341038fca923d5bf0ba075c9fe340a6bc7d18712f90"} Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.661374 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2wk5x" event={"ID":"d8bca2a0-12fd-48ab-9507-ca0824a394cb","Type":"ContainerStarted","Data":"c7492ac1cd2504054f6adda249640cc9f9002fcbe3c98c0f70c253767e2eb74a"} Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.661401 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2wk5x" event={"ID":"d8bca2a0-12fd-48ab-9507-ca0824a394cb","Type":"ContainerStarted","Data":"71fd472c5f2ddb6a8b0e5649102960b7ff67566c277cda973ff2a0cf8d1b14e3"} Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.662607 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-565td" event={"ID":"98f93152-3943-4bb2-ac4b-d2d79286e19d","Type":"ContainerStarted","Data":"fc7917b82e212f9044fecbda5007cfa15e685e272450a1d0227b215973983c17"} Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.669375 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4ftk\" (UniqueName: \"kubernetes.io/projected/20c24fb1-6b9c-4688-bb3e-6bc97fce9856-kube-api-access-m4ftk\") pod \"route-controller-manager-6576b87f9c-t4q8n\" (UID: \"20c24fb1-6b9c-4688-bb3e-6bc97fce9856\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4q8n" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.677206 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w587f" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.683483 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkj8m"] Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.686307 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rr856\" (UniqueName: \"kubernetes.io/projected/76a78a93-c264-4c12-b89a-b265e6731c7e-kube-api-access-rr856\") pod \"machine-config-operator-74547568cd-6tz69\" (UID: \"76a78a93-c264-4c12-b89a-b265e6731c7e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6tz69" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.707935 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmlgg\" (UniqueName: \"kubernetes.io/projected/5feff7d0-1eb1-42d4-8891-6758fdbcb01f-kube-api-access-bmlgg\") pod \"migrator-59844c95c7-jqxjt\" (UID: \"5feff7d0-1eb1-42d4-8891-6758fdbcb01f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jqxjt" Jan 30 21:45:51 crc kubenswrapper[4869]: W0130 21:45:51.711265 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1da8dd6_5d73_4fc5_9e46_40c77930b2da.slice/crio-b6d78228b8ea286fa91088ad5ecbffa0ea26379194b603beac7cca6f01e7bfb2 WatchSource:0}: Error finding container b6d78228b8ea286fa91088ad5ecbffa0ea26379194b603beac7cca6f01e7bfb2: Status 404 returned error can't find the container with id b6d78228b8ea286fa91088ad5ecbffa0ea26379194b603beac7cca6f01e7bfb2 Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.749615 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6tz69" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.749924 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mktrn\" (UniqueName: \"kubernetes.io/projected/d151f3f9-526d-40a2-8021-13aa609295b1-kube-api-access-mktrn\") pod \"multus-admission-controller-857f4d67dd-hgz5t\" (UID: \"d151f3f9-526d-40a2-8021-13aa609295b1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hgz5t" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.758218 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jqxjt" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.766492 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ftlt\" (UniqueName: \"kubernetes.io/projected/502a480e-94d6-425b-923c-ef29c26c09a2-kube-api-access-4ftlt\") pod \"service-ca-operator-777779d784-4p68r\" (UID: \"502a480e-94d6-425b-923c-ef29c26c09a2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4p68r" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.781593 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm6zq\" (UniqueName: \"kubernetes.io/projected/e7c790ef-ae52-4809-b6f2-088811793867-kube-api-access-wm6zq\") pod \"oauth-openshift-558db77b4-fgmnt\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.794039 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6tsq\" (UniqueName: \"kubernetes.io/projected/5d54f896-380b-4253-bcba-8576673c606e-kube-api-access-l6tsq\") pod \"olm-operator-6b444d44fb-9f2tz\" (UID: \"5d54f896-380b-4253-bcba-8576673c606e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9f2tz" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.800245 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9f2tz" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.812581 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4q8n" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.819711 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghpp5\" (UniqueName: \"kubernetes.io/projected/f460e85e-82b0-4ae3-b8f2-c9ec043b0dbc-kube-api-access-ghpp5\") pod \"packageserver-d55dfcdfc-9zpxh\" (UID: \"f460e85e-82b0-4ae3-b8f2-c9ec043b0dbc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9zpxh" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.824247 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4p68r" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.833529 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.836213 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dskh\" (UniqueName: \"kubernetes.io/projected/a8b78a73-2ae1-4eed-8110-943a6ae6fe04-kube-api-access-8dskh\") pod \"openshift-controller-manager-operator-756b6f6bc6-v5kfz\" (UID: \"a8b78a73-2ae1-4eed-8110-943a6ae6fe04\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v5kfz" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.845161 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgw2t\" (UniqueName: \"kubernetes.io/projected/8b2bd278-ed0e-4e4e-90a6-9286bc29a664-kube-api-access-sgw2t\") pod \"controller-manager-879f6c89f-px2s8\" (UID: \"8b2bd278-ed0e-4e4e-90a6-9286bc29a664\") " pod="openshift-controller-manager/controller-manager-879f6c89f-px2s8" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.928618 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-kgmf9"] Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.933443 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/98f852c4-a74a-4153-a095-136d3ef7d5c2-profile-collector-cert\") pod \"catalog-operator-68c6474976-q4k7p\" (UID: \"98f852c4-a74a-4153-a095-136d3ef7d5c2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q4k7p" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.933498 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/77070268-41bc-4f03-bddd-5470d23e03b8-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-4pn7q\" (UID: \"77070268-41bc-4f03-bddd-5470d23e03b8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4pn7q" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.933523 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cff2bdad-4c6e-44bc-977a-376e09638df1-ca-trust-extracted\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.933551 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1bb8d75c-15d1-495d-a2ec-4c4e873d45e3-metrics-tls\") pod \"ingress-operator-5b745b69d9-m5j8f\" (UID: \"1bb8d75c-15d1-495d-a2ec-4c4e873d45e3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5j8f" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.933576 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/492fddc8-5b29-4b32-9b4c-9831317fae23-control-plane-machine-set-operator-tls\") pod 
\"control-plane-machine-set-operator-78cbb6b69f-6bxlw\" (UID: \"492fddc8-5b29-4b32-9b4c-9831317fae23\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6bxlw" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.933594 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f8d08a66-aeee-4eba-8436-4b124c45051a-etcd-client\") pod \"etcd-operator-b45778765-bgcv6\" (UID: \"f8d08a66-aeee-4eba-8436-4b124c45051a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bgcv6" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.933614 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/98f852c4-a74a-4153-a095-136d3ef7d5c2-srv-cert\") pod \"catalog-operator-68c6474976-q4k7p\" (UID: \"98f852c4-a74a-4153-a095-136d3ef7d5c2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q4k7p" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.933632 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cff2bdad-4c6e-44bc-977a-376e09638df1-bound-sa-token\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.933648 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f8d08a66-aeee-4eba-8436-4b124c45051a-etcd-service-ca\") pod \"etcd-operator-b45778765-bgcv6\" (UID: \"f8d08a66-aeee-4eba-8436-4b124c45051a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bgcv6" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.933681 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsq5r\" (UniqueName: \"kubernetes.io/projected/98f852c4-a74a-4153-a095-136d3ef7d5c2-kube-api-access-tsq5r\") pod \"catalog-operator-68c6474976-q4k7p\" (UID: \"98f852c4-a74a-4153-a095-136d3ef7d5c2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q4k7p" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.933708 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8d08a66-aeee-4eba-8436-4b124c45051a-config\") pod \"etcd-operator-b45778765-bgcv6\" (UID: \"f8d08a66-aeee-4eba-8436-4b124c45051a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bgcv6" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.933755 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.933789 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h85h\" (UniqueName: \"kubernetes.io/projected/77070268-41bc-4f03-bddd-5470d23e03b8-kube-api-access-6h85h\") pod \"package-server-manager-789f6589d5-4pn7q\" (UID: 
\"77070268-41bc-4f03-bddd-5470d23e03b8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4pn7q" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.933808 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1bb8d75c-15d1-495d-a2ec-4c4e873d45e3-bound-sa-token\") pod \"ingress-operator-5b745b69d9-m5j8f\" (UID: \"1bb8d75c-15d1-495d-a2ec-4c4e873d45e3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5j8f" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.933843 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8d08a66-aeee-4eba-8436-4b124c45051a-serving-cert\") pod \"etcd-operator-b45778765-bgcv6\" (UID: \"f8d08a66-aeee-4eba-8436-4b124c45051a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bgcv6" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.933874 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nzpq\" (UniqueName: \"kubernetes.io/projected/8b3074a3-1e32-4024-bcd1-0a7365f9b92a-kube-api-access-5nzpq\") pod \"kube-storage-version-migrator-operator-b67b599dd-4t6ks\" (UID: \"8b3074a3-1e32-4024-bcd1-0a7365f9b92a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4t6ks" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.933913 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f8d08a66-aeee-4eba-8436-4b124c45051a-etcd-ca\") pod \"etcd-operator-b45778765-bgcv6\" (UID: \"f8d08a66-aeee-4eba-8436-4b124c45051a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bgcv6" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.933936 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b3074a3-1e32-4024-bcd1-0a7365f9b92a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-4t6ks\" (UID: \"8b3074a3-1e32-4024-bcd1-0a7365f9b92a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4t6ks" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.933986 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cff2bdad-4c6e-44bc-977a-376e09638df1-trusted-ca\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.934002 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtlhv\" (UniqueName: \"kubernetes.io/projected/cff2bdad-4c6e-44bc-977a-376e09638df1-kube-api-access-wtlhv\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.934030 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cff2bdad-4c6e-44bc-977a-376e09638df1-registry-tls\") pod 
\"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.934051 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wjjv\" (UniqueName: \"kubernetes.io/projected/f8d08a66-aeee-4eba-8436-4b124c45051a-kube-api-access-5wjjv\") pod \"etcd-operator-b45778765-bgcv6\" (UID: \"f8d08a66-aeee-4eba-8436-4b124c45051a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bgcv6" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.934067 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cff2bdad-4c6e-44bc-977a-376e09638df1-registry-certificates\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.934086 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvv6h\" (UniqueName: \"kubernetes.io/projected/1bb8d75c-15d1-495d-a2ec-4c4e873d45e3-kube-api-access-cvv6h\") pod \"ingress-operator-5b745b69d9-m5j8f\" (UID: \"1bb8d75c-15d1-495d-a2ec-4c4e873d45e3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5j8f" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.934137 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cff2bdad-4c6e-44bc-977a-376e09638df1-installation-pull-secrets\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.934155 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkb7s\" (UniqueName: \"kubernetes.io/projected/492fddc8-5b29-4b32-9b4c-9831317fae23-kube-api-access-wkb7s\") pod \"control-plane-machine-set-operator-78cbb6b69f-6bxlw\" (UID: \"492fddc8-5b29-4b32-9b4c-9831317fae23\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6bxlw" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.934181 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1bb8d75c-15d1-495d-a2ec-4c4e873d45e3-trusted-ca\") pod \"ingress-operator-5b745b69d9-m5j8f\" (UID: \"1bb8d75c-15d1-495d-a2ec-4c4e873d45e3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5j8f" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.934231 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b3074a3-1e32-4024-bcd1-0a7365f9b92a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-4t6ks\" (UID: \"8b3074a3-1e32-4024-bcd1-0a7365f9b92a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4t6ks" Jan 30 21:45:51 crc kubenswrapper[4869]: E0130 21:45:51.936806 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-01-30 21:45:52.436783815 +0000 UTC m=+153.322541840 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.968706 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-px2s8" Jan 30 21:45:51 crc kubenswrapper[4869]: W0130 21:45:51.974009 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod400bd72e_e645_473e_b285_497d96f567ac.slice/crio-9db05dde20baeb8ab45692ff2477da3d311ad402e328fcfc9bd77a4e5bd069d2 WatchSource:0}: Error finding container 9db05dde20baeb8ab45692ff2477da3d311ad402e328fcfc9bd77a4e5bd069d2: Status 404 returned error can't find the container with id 9db05dde20baeb8ab45692ff2477da3d311ad402e328fcfc9bd77a4e5bd069d2 Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.976595 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-k4rff"] Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.986048 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt" Jan 30 21:45:51 crc kubenswrapper[4869]: I0130 21:45:51.993675 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v5kfz" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.008421 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9zpxh" Jan 30 21:45:52 crc kubenswrapper[4869]: W0130 21:45:52.032572 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49697d03_35f3_4fa3_9141_2bb8ae8eccab.slice/crio-45ef9a833a99975f3cbf0884fe48886a657e095798e2b1758adb354707782d79 WatchSource:0}: Error finding container 45ef9a833a99975f3cbf0884fe48886a657e095798e2b1758adb354707782d79: Status 404 returned error can't find the container with id 45ef9a833a99975f3cbf0884fe48886a657e095798e2b1758adb354707782d79 Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.035939 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:45:52 crc kubenswrapper[4869]: E0130 21:45:52.036167 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:45:52.536106166 +0000 UTC m=+153.421864191 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.036239 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsq5r\" (UniqueName: \"kubernetes.io/projected/98f852c4-a74a-4153-a095-136d3ef7d5c2-kube-api-access-tsq5r\") pod \"catalog-operator-68c6474976-q4k7p\" (UID: \"98f852c4-a74a-4153-a095-136d3ef7d5c2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q4k7p" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.036286 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8d08a66-aeee-4eba-8436-4b124c45051a-config\") pod \"etcd-operator-b45778765-bgcv6\" (UID: \"f8d08a66-aeee-4eba-8436-4b124c45051a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bgcv6" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.036311 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/99023de1-0d21-49eb-b133-1403f9224808-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-m95d6\" (UID: \"99023de1-0d21-49eb-b133-1403f9224808\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m95d6" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.036379 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71313c5e-2597-4c09-89c3-02a17afaeda5-config\") pod \"kube-controller-manager-operator-78b949d7b-5dlg7\" (UID: \"71313c5e-2597-4c09-89c3-02a17afaeda5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5dlg7" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.036409 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.036439 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h85h\" (UniqueName: \"kubernetes.io/projected/77070268-41bc-4f03-bddd-5470d23e03b8-kube-api-access-6h85h\") pod \"package-server-manager-789f6589d5-4pn7q\" (UID: \"77070268-41bc-4f03-bddd-5470d23e03b8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4pn7q" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.036456 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1bb8d75c-15d1-495d-a2ec-4c4e873d45e3-bound-sa-token\") pod \"ingress-operator-5b745b69d9-m5j8f\" (UID: \"1bb8d75c-15d1-495d-a2ec-4c4e873d45e3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5j8f" Jan 30 21:45:52 crc 
kubenswrapper[4869]: I0130 21:45:52.036493 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8d08a66-aeee-4eba-8436-4b124c45051a-serving-cert\") pod \"etcd-operator-b45778765-bgcv6\" (UID: \"f8d08a66-aeee-4eba-8436-4b124c45051a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bgcv6" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.036550 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f2aeffca-8872-4e12-8651-4fa2fe16be8e-csi-data-dir\") pod \"csi-hostpathplugin-gs5md\" (UID: \"f2aeffca-8872-4e12-8651-4fa2fe16be8e\") " pod="hostpath-provisioner/csi-hostpathplugin-gs5md" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.036568 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7tl2\" (UniqueName: \"kubernetes.io/projected/f2aeffca-8872-4e12-8651-4fa2fe16be8e-kube-api-access-b7tl2\") pod \"csi-hostpathplugin-gs5md\" (UID: \"f2aeffca-8872-4e12-8651-4fa2fe16be8e\") " pod="hostpath-provisioner/csi-hostpathplugin-gs5md" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.036588 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nzpq\" (UniqueName: \"kubernetes.io/projected/8b3074a3-1e32-4024-bcd1-0a7365f9b92a-kube-api-access-5nzpq\") pod \"kube-storage-version-migrator-operator-b67b599dd-4t6ks\" (UID: \"8b3074a3-1e32-4024-bcd1-0a7365f9b92a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4t6ks" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.036621 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f8d08a66-aeee-4eba-8436-4b124c45051a-etcd-ca\") pod \"etcd-operator-b45778765-bgcv6\" (UID: \"f8d08a66-aeee-4eba-8436-4b124c45051a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bgcv6" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.036653 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b8569fbd-38cb-48af-ae0f-44e0e5df35ce-certs\") pod \"machine-config-server-txphr\" (UID: \"b8569fbd-38cb-48af-ae0f-44e0e5df35ce\") " pod="openshift-machine-config-operator/machine-config-server-txphr" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.036673 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmvfs\" (UniqueName: \"kubernetes.io/projected/b8569fbd-38cb-48af-ae0f-44e0e5df35ce-kube-api-access-mmvfs\") pod \"machine-config-server-txphr\" (UID: \"b8569fbd-38cb-48af-ae0f-44e0e5df35ce\") " pod="openshift-machine-config-operator/machine-config-server-txphr" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.036708 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/636fcb63-d2ed-4d3a-84c8-caf785e27f12-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kkgzt\" (UID: \"636fcb63-d2ed-4d3a-84c8-caf785e27f12\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kkgzt" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.036743 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/8b3074a3-1e32-4024-bcd1-0a7365f9b92a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-4t6ks\" (UID: \"8b3074a3-1e32-4024-bcd1-0a7365f9b92a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4t6ks" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.036824 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ed1ad681-31ef-4081-9b80-acae8a1e58ac-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-brwlx\" (UID: \"ed1ad681-31ef-4081-9b80-acae8a1e58ac\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-brwlx" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.036873 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ctws\" (UniqueName: \"kubernetes.io/projected/ed1ad681-31ef-4081-9b80-acae8a1e58ac-kube-api-access-2ctws\") pod \"machine-config-controller-84d6567774-brwlx\" (UID: \"ed1ad681-31ef-4081-9b80-acae8a1e58ac\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-brwlx" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.036892 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f9e62b6-dc56-4339-9dd7-0d71b8df4053-config-volume\") pod \"dns-default-nrn8v\" (UID: \"3f9e62b6-dc56-4339-9dd7-0d71b8df4053\") " pod="openshift-dns/dns-default-nrn8v" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.036940 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3764a402-d82c-498f-91ab-91f657d352c6-metrics-certs\") pod \"router-default-5444994796-2zzvw\" (UID: \"3764a402-d82c-498f-91ab-91f657d352c6\") " pod="openshift-ingress/router-default-5444994796-2zzvw" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.036984 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f2aeffca-8872-4e12-8651-4fa2fe16be8e-plugins-dir\") pod \"csi-hostpathplugin-gs5md\" (UID: \"f2aeffca-8872-4e12-8651-4fa2fe16be8e\") " pod="hostpath-provisioner/csi-hostpathplugin-gs5md" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.037026 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cff2bdad-4c6e-44bc-977a-376e09638df1-trusted-ca\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.037046 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtlhv\" (UniqueName: \"kubernetes.io/projected/cff2bdad-4c6e-44bc-977a-376e09638df1-kube-api-access-wtlhv\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.037067 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/43f6cf85-1016-4b61-b95a-9ad7c66bb29f-secret-volume\") pod \"collect-profiles-29496825-lf7bw\" (UID: \"43f6cf85-1016-4b61-b95a-9ad7c66bb29f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-lf7bw" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.037087 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b8569fbd-38cb-48af-ae0f-44e0e5df35ce-node-bootstrap-token\") pod \"machine-config-server-txphr\" (UID: \"b8569fbd-38cb-48af-ae0f-44e0e5df35ce\") " pod="openshift-machine-config-operator/machine-config-server-txphr" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.037107 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zf5j\" (UniqueName: \"kubernetes.io/projected/9b85629c-9742-4a56-b91a-601938afd139-kube-api-access-2zf5j\") pod \"ingress-canary-mfmhf\" (UID: \"9b85629c-9742-4a56-b91a-601938afd139\") " pod="openshift-ingress-canary/ingress-canary-mfmhf" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.038481 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/636fcb63-d2ed-4d3a-84c8-caf785e27f12-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kkgzt\" (UID: \"636fcb63-d2ed-4d3a-84c8-caf785e27f12\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kkgzt" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.038535 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/215f240b-da02-40a5-b1f4-c5cfe17407b6-signing-cabundle\") pod \"service-ca-9c57cc56f-v5sqv\" (UID: \"215f240b-da02-40a5-b1f4-c5cfe17407b6\") " pod="openshift-service-ca/service-ca-9c57cc56f-v5sqv" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.038596 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cff2bdad-4c6e-44bc-977a-376e09638df1-registry-tls\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.038647 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s9ht\" (UniqueName: \"kubernetes.io/projected/3f9e62b6-dc56-4339-9dd7-0d71b8df4053-kube-api-access-7s9ht\") pod \"dns-default-nrn8v\" (UID: \"3f9e62b6-dc56-4339-9dd7-0d71b8df4053\") " pod="openshift-dns/dns-default-nrn8v" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.038676 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/3764a402-d82c-498f-91ab-91f657d352c6-default-certificate\") pod \"router-default-5444994796-2zzvw\" (UID: \"3764a402-d82c-498f-91ab-91f657d352c6\") " pod="openshift-ingress/router-default-5444994796-2zzvw" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.038702 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f2aeffca-8872-4e12-8651-4fa2fe16be8e-registration-dir\") pod \"csi-hostpathplugin-gs5md\" 
(UID: \"f2aeffca-8872-4e12-8651-4fa2fe16be8e\") " pod="hostpath-provisioner/csi-hostpathplugin-gs5md" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.038735 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wjjv\" (UniqueName: \"kubernetes.io/projected/f8d08a66-aeee-4eba-8436-4b124c45051a-kube-api-access-5wjjv\") pod \"etcd-operator-b45778765-bgcv6\" (UID: \"f8d08a66-aeee-4eba-8436-4b124c45051a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bgcv6" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.038759 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ed1ad681-31ef-4081-9b80-acae8a1e58ac-proxy-tls\") pod \"machine-config-controller-84d6567774-brwlx\" (UID: \"ed1ad681-31ef-4081-9b80-acae8a1e58ac\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-brwlx" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.038760 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8d08a66-aeee-4eba-8436-4b124c45051a-config\") pod \"etcd-operator-b45778765-bgcv6\" (UID: \"f8d08a66-aeee-4eba-8436-4b124c45051a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bgcv6" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.038823 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99023de1-0d21-49eb-b133-1403f9224808-config\") pod \"kube-apiserver-operator-766d6c64bb-m95d6\" (UID: \"99023de1-0d21-49eb-b133-1403f9224808\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m95d6" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.038890 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f8d08a66-aeee-4eba-8436-4b124c45051a-etcd-ca\") pod \"etcd-operator-b45778765-bgcv6\" (UID: \"f8d08a66-aeee-4eba-8436-4b124c45051a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bgcv6" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.039268 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2kxz\" (UniqueName: \"kubernetes.io/projected/215f240b-da02-40a5-b1f4-c5cfe17407b6-kube-api-access-t2kxz\") pod \"service-ca-9c57cc56f-v5sqv\" (UID: \"215f240b-da02-40a5-b1f4-c5cfe17407b6\") " pod="openshift-service-ca/service-ca-9c57cc56f-v5sqv" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.039299 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cff2bdad-4c6e-44bc-977a-376e09638df1-registry-certificates\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.039352 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tlts\" (UniqueName: \"kubernetes.io/projected/43f6cf85-1016-4b61-b95a-9ad7c66bb29f-kube-api-access-6tlts\") pod \"collect-profiles-29496825-lf7bw\" (UID: \"43f6cf85-1016-4b61-b95a-9ad7c66bb29f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-lf7bw" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.039395 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvv6h\" (UniqueName: \"kubernetes.io/projected/1bb8d75c-15d1-495d-a2ec-4c4e873d45e3-kube-api-access-cvv6h\") pod \"ingress-operator-5b745b69d9-m5j8f\" (UID: \"1bb8d75c-15d1-495d-a2ec-4c4e873d45e3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5j8f" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.039413 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/3764a402-d82c-498f-91ab-91f657d352c6-stats-auth\") pod \"router-default-5444994796-2zzvw\" (UID: \"3764a402-d82c-498f-91ab-91f657d352c6\") " pod="openshift-ingress/router-default-5444994796-2zzvw" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.039462 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71313c5e-2597-4c09-89c3-02a17afaeda5-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-5dlg7\" (UID: \"71313c5e-2597-4c09-89c3-02a17afaeda5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5dlg7" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.039487 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/215f240b-da02-40a5-b1f4-c5cfe17407b6-signing-key\") pod \"service-ca-9c57cc56f-v5sqv\" (UID: \"215f240b-da02-40a5-b1f4-c5cfe17407b6\") " pod="openshift-service-ca/service-ca-9c57cc56f-v5sqv" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.039526 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99023de1-0d21-49eb-b133-1403f9224808-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-m95d6\" (UID: \"99023de1-0d21-49eb-b133-1403f9224808\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m95d6" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.039768 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b3074a3-1e32-4024-bcd1-0a7365f9b92a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-4t6ks\" (UID: \"8b3074a3-1e32-4024-bcd1-0a7365f9b92a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4t6ks" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.043607 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cff2bdad-4c6e-44bc-977a-376e09638df1-installation-pull-secrets\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.043646 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkb7s\" (UniqueName: \"kubernetes.io/projected/492fddc8-5b29-4b32-9b4c-9831317fae23-kube-api-access-wkb7s\") pod \"control-plane-machine-set-operator-78cbb6b69f-6bxlw\" (UID: \"492fddc8-5b29-4b32-9b4c-9831317fae23\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6bxlw" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.045297 4869 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-hgz5t" Jan 30 21:45:52 crc kubenswrapper[4869]: E0130 21:45:52.045450 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:45:52.545433279 +0000 UTC m=+153.431191304 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.046073 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1bb8d75c-15d1-495d-a2ec-4c4e873d45e3-trusted-ca\") pod \"ingress-operator-5b745b69d9-m5j8f\" (UID: \"1bb8d75c-15d1-495d-a2ec-4c4e873d45e3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5j8f" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.046105 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/636fcb63-d2ed-4d3a-84c8-caf785e27f12-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kkgzt\" (UID: \"636fcb63-d2ed-4d3a-84c8-caf785e27f12\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kkgzt" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.046678 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fca2992a-2cb5-4b86-9cfe-66d8dae76acb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-b86ql\" (UID: \"fca2992a-2cb5-4b86-9cfe-66d8dae76acb\") " pod="openshift-marketplace/marketplace-operator-79b997595-b86ql" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.046888 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71313c5e-2597-4c09-89c3-02a17afaeda5-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-5dlg7\" (UID: \"71313c5e-2597-4c09-89c3-02a17afaeda5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5dlg7" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.046924 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43f6cf85-1016-4b61-b95a-9ad7c66bb29f-config-volume\") pod \"collect-profiles-29496825-lf7bw\" (UID: \"43f6cf85-1016-4b61-b95a-9ad7c66bb29f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-lf7bw" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.046998 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fca2992a-2cb5-4b86-9cfe-66d8dae76acb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-b86ql\" (UID: 
\"fca2992a-2cb5-4b86-9cfe-66d8dae76acb\") " pod="openshift-marketplace/marketplace-operator-79b997595-b86ql" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.047031 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b3074a3-1e32-4024-bcd1-0a7365f9b92a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-4t6ks\" (UID: \"8b3074a3-1e32-4024-bcd1-0a7365f9b92a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4t6ks" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.047028 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cff2bdad-4c6e-44bc-977a-376e09638df1-trusted-ca\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.048089 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bh5m\" (UniqueName: \"kubernetes.io/projected/3764a402-d82c-498f-91ab-91f657d352c6-kube-api-access-7bh5m\") pod \"router-default-5444994796-2zzvw\" (UID: \"3764a402-d82c-498f-91ab-91f657d352c6\") " pod="openshift-ingress/router-default-5444994796-2zzvw" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.048117 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-w587f"] Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.048500 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/98f852c4-a74a-4153-a095-136d3ef7d5c2-profile-collector-cert\") pod \"catalog-operator-68c6474976-q4k7p\" (UID: \"98f852c4-a74a-4153-a095-136d3ef7d5c2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q4k7p" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.049833 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1bb8d75c-15d1-495d-a2ec-4c4e873d45e3-trusted-ca\") pod \"ingress-operator-5b745b69d9-m5j8f\" (UID: \"1bb8d75c-15d1-495d-a2ec-4c4e873d45e3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5j8f" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.050138 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7f2v\" (UniqueName: \"kubernetes.io/projected/fca2992a-2cb5-4b86-9cfe-66d8dae76acb-kube-api-access-p7f2v\") pod \"marketplace-operator-79b997595-b86ql\" (UID: \"fca2992a-2cb5-4b86-9cfe-66d8dae76acb\") " pod="openshift-marketplace/marketplace-operator-79b997595-b86ql" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.051001 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/77070268-41bc-4f03-bddd-5470d23e03b8-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-4pn7q\" (UID: \"77070268-41bc-4f03-bddd-5470d23e03b8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4pn7q" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.051052 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/3f9e62b6-dc56-4339-9dd7-0d71b8df4053-metrics-tls\") pod \"dns-default-nrn8v\" (UID: \"3f9e62b6-dc56-4339-9dd7-0d71b8df4053\") " pod="openshift-dns/dns-default-nrn8v" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.051115 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cff2bdad-4c6e-44bc-977a-376e09638df1-ca-trust-extracted\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.051136 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9b85629c-9742-4a56-b91a-601938afd139-cert\") pod \"ingress-canary-mfmhf\" (UID: \"9b85629c-9742-4a56-b91a-601938afd139\") " pod="openshift-ingress-canary/ingress-canary-mfmhf" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.051175 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f2aeffca-8872-4e12-8651-4fa2fe16be8e-mountpoint-dir\") pod \"csi-hostpathplugin-gs5md\" (UID: \"f2aeffca-8872-4e12-8651-4fa2fe16be8e\") " pod="hostpath-provisioner/csi-hostpathplugin-gs5md" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.051223 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1bb8d75c-15d1-495d-a2ec-4c4e873d45e3-metrics-tls\") pod \"ingress-operator-5b745b69d9-m5j8f\" (UID: \"1bb8d75c-15d1-495d-a2ec-4c4e873d45e3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5j8f" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.051254 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/492fddc8-5b29-4b32-9b4c-9831317fae23-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-6bxlw\" (UID: \"492fddc8-5b29-4b32-9b4c-9831317fae23\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6bxlw" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.051282 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f8d08a66-aeee-4eba-8436-4b124c45051a-etcd-client\") pod \"etcd-operator-b45778765-bgcv6\" (UID: \"f8d08a66-aeee-4eba-8436-4b124c45051a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bgcv6" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.051300 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/98f852c4-a74a-4153-a095-136d3ef7d5c2-srv-cert\") pod \"catalog-operator-68c6474976-q4k7p\" (UID: \"98f852c4-a74a-4153-a095-136d3ef7d5c2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q4k7p" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.051324 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cff2bdad-4c6e-44bc-977a-376e09638df1-bound-sa-token\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:52 crc 
kubenswrapper[4869]: I0130 21:45:52.051343 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f8d08a66-aeee-4eba-8436-4b124c45051a-etcd-service-ca\") pod \"etcd-operator-b45778765-bgcv6\" (UID: \"f8d08a66-aeee-4eba-8436-4b124c45051a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bgcv6" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.051365 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3764a402-d82c-498f-91ab-91f657d352c6-service-ca-bundle\") pod \"router-default-5444994796-2zzvw\" (UID: \"3764a402-d82c-498f-91ab-91f657d352c6\") " pod="openshift-ingress/router-default-5444994796-2zzvw" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.051409 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f2aeffca-8872-4e12-8651-4fa2fe16be8e-socket-dir\") pod \"csi-hostpathplugin-gs5md\" (UID: \"f2aeffca-8872-4e12-8651-4fa2fe16be8e\") " pod="hostpath-provisioner/csi-hostpathplugin-gs5md" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.052345 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f8d08a66-aeee-4eba-8436-4b124c45051a-etcd-service-ca\") pod \"etcd-operator-b45778765-bgcv6\" (UID: \"f8d08a66-aeee-4eba-8436-4b124c45051a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bgcv6" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.053318 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cff2bdad-4c6e-44bc-977a-376e09638df1-ca-trust-extracted\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.053945 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cff2bdad-4c6e-44bc-977a-376e09638df1-registry-certificates\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.055154 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/98f852c4-a74a-4153-a095-136d3ef7d5c2-srv-cert\") pod \"catalog-operator-68c6474976-q4k7p\" (UID: \"98f852c4-a74a-4153-a095-136d3ef7d5c2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q4k7p" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.056409 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cff2bdad-4c6e-44bc-977a-376e09638df1-installation-pull-secrets\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.056836 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/77070268-41bc-4f03-bddd-5470d23e03b8-package-server-manager-serving-cert\") 
pod \"package-server-manager-789f6589d5-4pn7q\" (UID: \"77070268-41bc-4f03-bddd-5470d23e03b8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4pn7q" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.057227 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b3074a3-1e32-4024-bcd1-0a7365f9b92a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-4t6ks\" (UID: \"8b3074a3-1e32-4024-bcd1-0a7365f9b92a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4t6ks" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.057355 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f8d08a66-aeee-4eba-8436-4b124c45051a-etcd-client\") pod \"etcd-operator-b45778765-bgcv6\" (UID: \"f8d08a66-aeee-4eba-8436-4b124c45051a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bgcv6" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.057412 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1bb8d75c-15d1-495d-a2ec-4c4e873d45e3-metrics-tls\") pod \"ingress-operator-5b745b69d9-m5j8f\" (UID: \"1bb8d75c-15d1-495d-a2ec-4c4e873d45e3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5j8f" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.059459 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cff2bdad-4c6e-44bc-977a-376e09638df1-registry-tls\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.063734 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/492fddc8-5b29-4b32-9b4c-9831317fae23-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-6bxlw\" (UID: \"492fddc8-5b29-4b32-9b4c-9831317fae23\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6bxlw" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.070696 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nzpq\" (UniqueName: \"kubernetes.io/projected/8b3074a3-1e32-4024-bcd1-0a7365f9b92a-kube-api-access-5nzpq\") pod \"kube-storage-version-migrator-operator-b67b599dd-4t6ks\" (UID: \"8b3074a3-1e32-4024-bcd1-0a7365f9b92a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4t6ks" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.079808 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-6tz69"] Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.083064 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8d08a66-aeee-4eba-8436-4b124c45051a-serving-cert\") pod \"etcd-operator-b45778765-bgcv6\" (UID: \"f8d08a66-aeee-4eba-8436-4b124c45051a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bgcv6" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.104277 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" 
(UniqueName: \"kubernetes.io/secret/98f852c4-a74a-4153-a095-136d3ef7d5c2-profile-collector-cert\") pod \"catalog-operator-68c6474976-q4k7p\" (UID: \"98f852c4-a74a-4153-a095-136d3ef7d5c2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q4k7p" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.123774 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsq5r\" (UniqueName: \"kubernetes.io/projected/98f852c4-a74a-4153-a095-136d3ef7d5c2-kube-api-access-tsq5r\") pod \"catalog-operator-68c6474976-q4k7p\" (UID: \"98f852c4-a74a-4153-a095-136d3ef7d5c2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q4k7p" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.127919 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1bb8d75c-15d1-495d-a2ec-4c4e873d45e3-bound-sa-token\") pod \"ingress-operator-5b745b69d9-m5j8f\" (UID: \"1bb8d75c-15d1-495d-a2ec-4c4e873d45e3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5j8f" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.173364 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.173770 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ctws\" (UniqueName: \"kubernetes.io/projected/ed1ad681-31ef-4081-9b80-acae8a1e58ac-kube-api-access-2ctws\") pod \"machine-config-controller-84d6567774-brwlx\" (UID: \"ed1ad681-31ef-4081-9b80-acae8a1e58ac\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-brwlx" Jan 30 21:45:52 crc kubenswrapper[4869]: E0130 21:45:52.173802 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:45:52.673766004 +0000 UTC m=+153.559524029 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.174025 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f9e62b6-dc56-4339-9dd7-0d71b8df4053-config-volume\") pod \"dns-default-nrn8v\" (UID: \"3f9e62b6-dc56-4339-9dd7-0d71b8df4053\") " pod="openshift-dns/dns-default-nrn8v" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.174137 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3764a402-d82c-498f-91ab-91f657d352c6-metrics-certs\") pod \"router-default-5444994796-2zzvw\" (UID: \"3764a402-d82c-498f-91ab-91f657d352c6\") " pod="openshift-ingress/router-default-5444994796-2zzvw" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.174229 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f2aeffca-8872-4e12-8651-4fa2fe16be8e-plugins-dir\") pod \"csi-hostpathplugin-gs5md\" (UID: \"f2aeffca-8872-4e12-8651-4fa2fe16be8e\") " pod="hostpath-provisioner/csi-hostpathplugin-gs5md" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.174340 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/43f6cf85-1016-4b61-b95a-9ad7c66bb29f-secret-volume\") pod \"collect-profiles-29496825-lf7bw\" (UID: \"43f6cf85-1016-4b61-b95a-9ad7c66bb29f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-lf7bw" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.174432 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b8569fbd-38cb-48af-ae0f-44e0e5df35ce-node-bootstrap-token\") pod \"machine-config-server-txphr\" (UID: \"b8569fbd-38cb-48af-ae0f-44e0e5df35ce\") " pod="openshift-machine-config-operator/machine-config-server-txphr" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.174534 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zf5j\" (UniqueName: \"kubernetes.io/projected/9b85629c-9742-4a56-b91a-601938afd139-kube-api-access-2zf5j\") pod \"ingress-canary-mfmhf\" (UID: \"9b85629c-9742-4a56-b91a-601938afd139\") " pod="openshift-ingress-canary/ingress-canary-mfmhf" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.174631 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/636fcb63-d2ed-4d3a-84c8-caf785e27f12-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kkgzt\" (UID: \"636fcb63-d2ed-4d3a-84c8-caf785e27f12\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kkgzt" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.174725 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/215f240b-da02-40a5-b1f4-c5cfe17407b6-signing-cabundle\") pod \"service-ca-9c57cc56f-v5sqv\" (UID: \"215f240b-da02-40a5-b1f4-c5cfe17407b6\") " pod="openshift-service-ca/service-ca-9c57cc56f-v5sqv" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.174819 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s9ht\" (UniqueName: \"kubernetes.io/projected/3f9e62b6-dc56-4339-9dd7-0d71b8df4053-kube-api-access-7s9ht\") pod \"dns-default-nrn8v\" (UID: \"3f9e62b6-dc56-4339-9dd7-0d71b8df4053\") " pod="openshift-dns/dns-default-nrn8v" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.174926 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/3764a402-d82c-498f-91ab-91f657d352c6-default-certificate\") pod \"router-default-5444994796-2zzvw\" (UID: \"3764a402-d82c-498f-91ab-91f657d352c6\") " pod="openshift-ingress/router-default-5444994796-2zzvw" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.175038 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f2aeffca-8872-4e12-8651-4fa2fe16be8e-registration-dir\") pod \"csi-hostpathplugin-gs5md\" (UID: \"f2aeffca-8872-4e12-8651-4fa2fe16be8e\") " pod="hostpath-provisioner/csi-hostpathplugin-gs5md" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.175176 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ed1ad681-31ef-4081-9b80-acae8a1e58ac-proxy-tls\") pod \"machine-config-controller-84d6567774-brwlx\" (UID: \"ed1ad681-31ef-4081-9b80-acae8a1e58ac\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-brwlx" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.175277 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99023de1-0d21-49eb-b133-1403f9224808-config\") pod \"kube-apiserver-operator-766d6c64bb-m95d6\" (UID: \"99023de1-0d21-49eb-b133-1403f9224808\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m95d6" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.175381 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2kxz\" (UniqueName: \"kubernetes.io/projected/215f240b-da02-40a5-b1f4-c5cfe17407b6-kube-api-access-t2kxz\") pod \"service-ca-9c57cc56f-v5sqv\" (UID: \"215f240b-da02-40a5-b1f4-c5cfe17407b6\") " pod="openshift-service-ca/service-ca-9c57cc56f-v5sqv" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.175500 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tlts\" (UniqueName: \"kubernetes.io/projected/43f6cf85-1016-4b61-b95a-9ad7c66bb29f-kube-api-access-6tlts\") pod \"collect-profiles-29496825-lf7bw\" (UID: \"43f6cf85-1016-4b61-b95a-9ad7c66bb29f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-lf7bw" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.175624 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/3764a402-d82c-498f-91ab-91f657d352c6-stats-auth\") pod \"router-default-5444994796-2zzvw\" (UID: \"3764a402-d82c-498f-91ab-91f657d352c6\") " pod="openshift-ingress/router-default-5444994796-2zzvw" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 
21:45:52.175733 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71313c5e-2597-4c09-89c3-02a17afaeda5-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-5dlg7\" (UID: \"71313c5e-2597-4c09-89c3-02a17afaeda5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5dlg7" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.175849 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/215f240b-da02-40a5-b1f4-c5cfe17407b6-signing-key\") pod \"service-ca-9c57cc56f-v5sqv\" (UID: \"215f240b-da02-40a5-b1f4-c5cfe17407b6\") " pod="openshift-service-ca/service-ca-9c57cc56f-v5sqv" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.175969 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99023de1-0d21-49eb-b133-1403f9224808-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-m95d6\" (UID: \"99023de1-0d21-49eb-b133-1403f9224808\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m95d6" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.176252 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/636fcb63-d2ed-4d3a-84c8-caf785e27f12-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kkgzt\" (UID: \"636fcb63-d2ed-4d3a-84c8-caf785e27f12\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kkgzt" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.176365 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fca2992a-2cb5-4b86-9cfe-66d8dae76acb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-b86ql\" (UID: \"fca2992a-2cb5-4b86-9cfe-66d8dae76acb\") " pod="openshift-marketplace/marketplace-operator-79b997595-b86ql" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.176474 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71313c5e-2597-4c09-89c3-02a17afaeda5-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-5dlg7\" (UID: \"71313c5e-2597-4c09-89c3-02a17afaeda5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5dlg7" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.176565 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f9e62b6-dc56-4339-9dd7-0d71b8df4053-config-volume\") pod \"dns-default-nrn8v\" (UID: \"3f9e62b6-dc56-4339-9dd7-0d71b8df4053\") " pod="openshift-dns/dns-default-nrn8v" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.176611 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/215f240b-da02-40a5-b1f4-c5cfe17407b6-signing-cabundle\") pod \"service-ca-9c57cc56f-v5sqv\" (UID: \"215f240b-da02-40a5-b1f4-c5cfe17407b6\") " pod="openshift-service-ca/service-ca-9c57cc56f-v5sqv" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.177614 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: 
\"kubernetes.io/host-path/f2aeffca-8872-4e12-8651-4fa2fe16be8e-plugins-dir\") pod \"csi-hostpathplugin-gs5md\" (UID: \"f2aeffca-8872-4e12-8651-4fa2fe16be8e\") " pod="hostpath-provisioner/csi-hostpathplugin-gs5md" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.177749 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43f6cf85-1016-4b61-b95a-9ad7c66bb29f-config-volume\") pod \"collect-profiles-29496825-lf7bw\" (UID: \"43f6cf85-1016-4b61-b95a-9ad7c66bb29f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-lf7bw" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.178434 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99023de1-0d21-49eb-b133-1403f9224808-config\") pod \"kube-apiserver-operator-766d6c64bb-m95d6\" (UID: \"99023de1-0d21-49eb-b133-1403f9224808\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m95d6" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.178543 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f2aeffca-8872-4e12-8651-4fa2fe16be8e-registration-dir\") pod \"csi-hostpathplugin-gs5md\" (UID: \"f2aeffca-8872-4e12-8651-4fa2fe16be8e\") " pod="hostpath-provisioner/csi-hostpathplugin-gs5md" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.176590 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43f6cf85-1016-4b61-b95a-9ad7c66bb29f-config-volume\") pod \"collect-profiles-29496825-lf7bw\" (UID: \"43f6cf85-1016-4b61-b95a-9ad7c66bb29f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-lf7bw" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.179532 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fca2992a-2cb5-4b86-9cfe-66d8dae76acb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-b86ql\" (UID: \"fca2992a-2cb5-4b86-9cfe-66d8dae76acb\") " pod="openshift-marketplace/marketplace-operator-79b997595-b86ql" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.179588 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bh5m\" (UniqueName: \"kubernetes.io/projected/3764a402-d82c-498f-91ab-91f657d352c6-kube-api-access-7bh5m\") pod \"router-default-5444994796-2zzvw\" (UID: \"3764a402-d82c-498f-91ab-91f657d352c6\") " pod="openshift-ingress/router-default-5444994796-2zzvw" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.179650 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7f2v\" (UniqueName: \"kubernetes.io/projected/fca2992a-2cb5-4b86-9cfe-66d8dae76acb-kube-api-access-p7f2v\") pod \"marketplace-operator-79b997595-b86ql\" (UID: \"fca2992a-2cb5-4b86-9cfe-66d8dae76acb\") " pod="openshift-marketplace/marketplace-operator-79b997595-b86ql" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.179693 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3f9e62b6-dc56-4339-9dd7-0d71b8df4053-metrics-tls\") pod \"dns-default-nrn8v\" (UID: \"3f9e62b6-dc56-4339-9dd7-0d71b8df4053\") " pod="openshift-dns/dns-default-nrn8v" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 
21:45:52.179731 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9b85629c-9742-4a56-b91a-601938afd139-cert\") pod \"ingress-canary-mfmhf\" (UID: \"9b85629c-9742-4a56-b91a-601938afd139\") " pod="openshift-ingress-canary/ingress-canary-mfmhf" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.179765 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f2aeffca-8872-4e12-8651-4fa2fe16be8e-mountpoint-dir\") pod \"csi-hostpathplugin-gs5md\" (UID: \"f2aeffca-8872-4e12-8651-4fa2fe16be8e\") " pod="hostpath-provisioner/csi-hostpathplugin-gs5md" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.179818 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3764a402-d82c-498f-91ab-91f657d352c6-service-ca-bundle\") pod \"router-default-5444994796-2zzvw\" (UID: \"3764a402-d82c-498f-91ab-91f657d352c6\") " pod="openshift-ingress/router-default-5444994796-2zzvw" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.179842 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f2aeffca-8872-4e12-8651-4fa2fe16be8e-socket-dir\") pod \"csi-hostpathplugin-gs5md\" (UID: \"f2aeffca-8872-4e12-8651-4fa2fe16be8e\") " pod="hostpath-provisioner/csi-hostpathplugin-gs5md" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.179874 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/99023de1-0d21-49eb-b133-1403f9224808-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-m95d6\" (UID: \"99023de1-0d21-49eb-b133-1403f9224808\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m95d6" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.179943 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71313c5e-2597-4c09-89c3-02a17afaeda5-config\") pod \"kube-controller-manager-operator-78b949d7b-5dlg7\" (UID: \"71313c5e-2597-4c09-89c3-02a17afaeda5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5dlg7" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.179980 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.180074 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f2aeffca-8872-4e12-8651-4fa2fe16be8e-csi-data-dir\") pod \"csi-hostpathplugin-gs5md\" (UID: \"f2aeffca-8872-4e12-8651-4fa2fe16be8e\") " pod="hostpath-provisioner/csi-hostpathplugin-gs5md" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.180104 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7tl2\" (UniqueName: \"kubernetes.io/projected/f2aeffca-8872-4e12-8651-4fa2fe16be8e-kube-api-access-b7tl2\") pod \"csi-hostpathplugin-gs5md\" (UID: 
\"f2aeffca-8872-4e12-8651-4fa2fe16be8e\") " pod="hostpath-provisioner/csi-hostpathplugin-gs5md" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.180144 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b8569fbd-38cb-48af-ae0f-44e0e5df35ce-certs\") pod \"machine-config-server-txphr\" (UID: \"b8569fbd-38cb-48af-ae0f-44e0e5df35ce\") " pod="openshift-machine-config-operator/machine-config-server-txphr" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.180166 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmvfs\" (UniqueName: \"kubernetes.io/projected/b8569fbd-38cb-48af-ae0f-44e0e5df35ce-kube-api-access-mmvfs\") pod \"machine-config-server-txphr\" (UID: \"b8569fbd-38cb-48af-ae0f-44e0e5df35ce\") " pod="openshift-machine-config-operator/machine-config-server-txphr" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.180198 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/636fcb63-d2ed-4d3a-84c8-caf785e27f12-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kkgzt\" (UID: \"636fcb63-d2ed-4d3a-84c8-caf785e27f12\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kkgzt" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.180230 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ed1ad681-31ef-4081-9b80-acae8a1e58ac-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-brwlx\" (UID: \"ed1ad681-31ef-4081-9b80-acae8a1e58ac\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-brwlx" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.182238 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ed1ad681-31ef-4081-9b80-acae8a1e58ac-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-brwlx\" (UID: \"ed1ad681-31ef-4081-9b80-acae8a1e58ac\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-brwlx" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.182326 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f2aeffca-8872-4e12-8651-4fa2fe16be8e-mountpoint-dir\") pod \"csi-hostpathplugin-gs5md\" (UID: \"f2aeffca-8872-4e12-8651-4fa2fe16be8e\") " pod="hostpath-provisioner/csi-hostpathplugin-gs5md" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.182759 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fca2992a-2cb5-4b86-9cfe-66d8dae76acb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-b86ql\" (UID: \"fca2992a-2cb5-4b86-9cfe-66d8dae76acb\") " pod="openshift-marketplace/marketplace-operator-79b997595-b86ql" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.183030 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3764a402-d82c-498f-91ab-91f657d352c6-service-ca-bundle\") pod \"router-default-5444994796-2zzvw\" (UID: \"3764a402-d82c-498f-91ab-91f657d352c6\") " pod="openshift-ingress/router-default-5444994796-2zzvw" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.183121 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f2aeffca-8872-4e12-8651-4fa2fe16be8e-socket-dir\") pod \"csi-hostpathplugin-gs5md\" (UID: \"f2aeffca-8872-4e12-8651-4fa2fe16be8e\") " pod="hostpath-provisioner/csi-hostpathplugin-gs5md" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.183756 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f2aeffca-8872-4e12-8651-4fa2fe16be8e-csi-data-dir\") pod \"csi-hostpathplugin-gs5md\" (UID: \"f2aeffca-8872-4e12-8651-4fa2fe16be8e\") " pod="hostpath-provisioner/csi-hostpathplugin-gs5md" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.184114 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/3764a402-d82c-498f-91ab-91f657d352c6-default-certificate\") pod \"router-default-5444994796-2zzvw\" (UID: \"3764a402-d82c-498f-91ab-91f657d352c6\") " pod="openshift-ingress/router-default-5444994796-2zzvw" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.184831 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ed1ad681-31ef-4081-9b80-acae8a1e58ac-proxy-tls\") pod \"machine-config-controller-84d6567774-brwlx\" (UID: \"ed1ad681-31ef-4081-9b80-acae8a1e58ac\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-brwlx" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.186652 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71313c5e-2597-4c09-89c3-02a17afaeda5-config\") pod \"kube-controller-manager-operator-78b949d7b-5dlg7\" (UID: \"71313c5e-2597-4c09-89c3-02a17afaeda5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5dlg7" Jan 30 21:45:52 crc kubenswrapper[4869]: E0130 21:45:52.186774 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:45:52.686289818 +0000 UTC m=+153.572047843 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.192532 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/636fcb63-d2ed-4d3a-84c8-caf785e27f12-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kkgzt\" (UID: \"636fcb63-d2ed-4d3a-84c8-caf785e27f12\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kkgzt" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.192616 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9b85629c-9742-4a56-b91a-601938afd139-cert\") pod \"ingress-canary-mfmhf\" (UID: \"9b85629c-9742-4a56-b91a-601938afd139\") " pod="openshift-ingress-canary/ingress-canary-mfmhf" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.192848 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99023de1-0d21-49eb-b133-1403f9224808-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-m95d6\" (UID: \"99023de1-0d21-49eb-b133-1403f9224808\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m95d6" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.193807 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/636fcb63-d2ed-4d3a-84c8-caf785e27f12-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kkgzt\" (UID: \"636fcb63-d2ed-4d3a-84c8-caf785e27f12\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kkgzt" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.194039 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b8569fbd-38cb-48af-ae0f-44e0e5df35ce-node-bootstrap-token\") pod \"machine-config-server-txphr\" (UID: \"b8569fbd-38cb-48af-ae0f-44e0e5df35ce\") " pod="openshift-machine-config-operator/machine-config-server-txphr" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.194291 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/3764a402-d82c-498f-91ab-91f657d352c6-stats-auth\") pod \"router-default-5444994796-2zzvw\" (UID: \"3764a402-d82c-498f-91ab-91f657d352c6\") " pod="openshift-ingress/router-default-5444994796-2zzvw" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.194588 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h85h\" (UniqueName: \"kubernetes.io/projected/77070268-41bc-4f03-bddd-5470d23e03b8-kube-api-access-6h85h\") pod \"package-server-manager-789f6589d5-4pn7q\" (UID: \"77070268-41bc-4f03-bddd-5470d23e03b8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4pn7q" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.201197 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/3764a402-d82c-498f-91ab-91f657d352c6-metrics-certs\") pod \"router-default-5444994796-2zzvw\" (UID: \"3764a402-d82c-498f-91ab-91f657d352c6\") " pod="openshift-ingress/router-default-5444994796-2zzvw" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.201385 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b8569fbd-38cb-48af-ae0f-44e0e5df35ce-certs\") pod \"machine-config-server-txphr\" (UID: \"b8569fbd-38cb-48af-ae0f-44e0e5df35ce\") " pod="openshift-machine-config-operator/machine-config-server-txphr" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.202501 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3f9e62b6-dc56-4339-9dd7-0d71b8df4053-metrics-tls\") pod \"dns-default-nrn8v\" (UID: \"3f9e62b6-dc56-4339-9dd7-0d71b8df4053\") " pod="openshift-dns/dns-default-nrn8v" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.203005 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/43f6cf85-1016-4b61-b95a-9ad7c66bb29f-secret-volume\") pod \"collect-profiles-29496825-lf7bw\" (UID: \"43f6cf85-1016-4b61-b95a-9ad7c66bb29f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-lf7bw" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.203773 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71313c5e-2597-4c09-89c3-02a17afaeda5-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-5dlg7\" (UID: \"71313c5e-2597-4c09-89c3-02a17afaeda5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5dlg7" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.204617 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtlhv\" (UniqueName: \"kubernetes.io/projected/cff2bdad-4c6e-44bc-977a-376e09638df1-kube-api-access-wtlhv\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.205993 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/215f240b-da02-40a5-b1f4-c5cfe17407b6-signing-key\") pod \"service-ca-9c57cc56f-v5sqv\" (UID: \"215f240b-da02-40a5-b1f4-c5cfe17407b6\") " pod="openshift-service-ca/service-ca-9c57cc56f-v5sqv" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.207502 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fca2992a-2cb5-4b86-9cfe-66d8dae76acb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-b86ql\" (UID: \"fca2992a-2cb5-4b86-9cfe-66d8dae76acb\") " pod="openshift-marketplace/marketplace-operator-79b997595-b86ql" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.209750 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wjjv\" (UniqueName: \"kubernetes.io/projected/f8d08a66-aeee-4eba-8436-4b124c45051a-kube-api-access-5wjjv\") pod \"etcd-operator-b45778765-bgcv6\" (UID: \"f8d08a66-aeee-4eba-8436-4b124c45051a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bgcv6" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.210454 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-wkb7s\" (UniqueName: \"kubernetes.io/projected/492fddc8-5b29-4b32-9b4c-9831317fae23-kube-api-access-wkb7s\") pod \"control-plane-machine-set-operator-78cbb6b69f-6bxlw\" (UID: \"492fddc8-5b29-4b32-9b4c-9831317fae23\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6bxlw" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.230475 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvv6h\" (UniqueName: \"kubernetes.io/projected/1bb8d75c-15d1-495d-a2ec-4c4e873d45e3-kube-api-access-cvv6h\") pod \"ingress-operator-5b745b69d9-m5j8f\" (UID: \"1bb8d75c-15d1-495d-a2ec-4c4e873d45e3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5j8f" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.254502 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cff2bdad-4c6e-44bc-977a-376e09638df1-bound-sa-token\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.280802 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:45:52 crc kubenswrapper[4869]: E0130 21:45:52.281238 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:45:52.781209579 +0000 UTC m=+153.666967604 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.293599 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ctws\" (UniqueName: \"kubernetes.io/projected/ed1ad681-31ef-4081-9b80-acae8a1e58ac-kube-api-access-2ctws\") pod \"machine-config-controller-84d6567774-brwlx\" (UID: \"ed1ad681-31ef-4081-9b80-acae8a1e58ac\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-brwlx" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.305132 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5j8f" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.322952 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/636fcb63-d2ed-4d3a-84c8-caf785e27f12-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kkgzt\" (UID: \"636fcb63-d2ed-4d3a-84c8-caf785e27f12\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kkgzt" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.328404 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-bgcv6" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.329169 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zf5j\" (UniqueName: \"kubernetes.io/projected/9b85629c-9742-4a56-b91a-601938afd139-kube-api-access-2zf5j\") pod \"ingress-canary-mfmhf\" (UID: \"9b85629c-9742-4a56-b91a-601938afd139\") " pod="openshift-ingress-canary/ingress-canary-mfmhf" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.335874 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4t6ks" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.361706 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s9ht\" (UniqueName: \"kubernetes.io/projected/3f9e62b6-dc56-4339-9dd7-0d71b8df4053-kube-api-access-7s9ht\") pod \"dns-default-nrn8v\" (UID: \"3f9e62b6-dc56-4339-9dd7-0d71b8df4053\") " pod="openshift-dns/dns-default-nrn8v" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.375200 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71313c5e-2597-4c09-89c3-02a17afaeda5-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-5dlg7\" (UID: \"71313c5e-2597-4c09-89c3-02a17afaeda5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5dlg7" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.375544 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q4k7p" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.382618 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:52 crc kubenswrapper[4869]: E0130 21:45:52.383030 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:45:52.883016708 +0000 UTC m=+153.768774733 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.383889 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4pn7q" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.404904 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kkgzt" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.413292 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tlts\" (UniqueName: \"kubernetes.io/projected/43f6cf85-1016-4b61-b95a-9ad7c66bb29f-kube-api-access-6tlts\") pod \"collect-profiles-29496825-lf7bw\" (UID: \"43f6cf85-1016-4b61-b95a-9ad7c66bb29f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-lf7bw" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.416250 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6bxlw" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.417230 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2kxz\" (UniqueName: \"kubernetes.io/projected/215f240b-da02-40a5-b1f4-c5cfe17407b6-kube-api-access-t2kxz\") pod \"service-ca-9c57cc56f-v5sqv\" (UID: \"215f240b-da02-40a5-b1f4-c5cfe17407b6\") " pod="openshift-service-ca/service-ca-9c57cc56f-v5sqv" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.432559 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-brwlx" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.440793 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bh5m\" (UniqueName: \"kubernetes.io/projected/3764a402-d82c-498f-91ab-91f657d352c6-kube-api-access-7bh5m\") pod \"router-default-5444994796-2zzvw\" (UID: \"3764a402-d82c-498f-91ab-91f657d352c6\") " pod="openshift-ingress/router-default-5444994796-2zzvw" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.447256 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-lf7bw" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.453604 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-2zzvw" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.457655 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/99023de1-0d21-49eb-b133-1403f9224808-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-m95d6\" (UID: \"99023de1-0d21-49eb-b133-1403f9224808\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m95d6" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.459478 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-mfmhf" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.462167 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4q8n"] Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.467451 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-nrn8v" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.470355 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7tl2\" (UniqueName: \"kubernetes.io/projected/f2aeffca-8872-4e12-8651-4fa2fe16be8e-kube-api-access-b7tl2\") pod \"csi-hostpathplugin-gs5md\" (UID: \"f2aeffca-8872-4e12-8651-4fa2fe16be8e\") " pod="hostpath-provisioner/csi-hostpathplugin-gs5md" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.485692 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:45:52 crc kubenswrapper[4869]: E0130 21:45:52.485907 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:45:52.985868918 +0000 UTC m=+153.871626943 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.485992 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:52 crc kubenswrapper[4869]: E0130 21:45:52.486306 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:45:52.986297682 +0000 UTC m=+153.872055707 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.492961 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmvfs\" (UniqueName: \"kubernetes.io/projected/b8569fbd-38cb-48af-ae0f-44e0e5df35ce-kube-api-access-mmvfs\") pod \"machine-config-server-txphr\" (UID: \"b8569fbd-38cb-48af-ae0f-44e0e5df35ce\") " pod="openshift-machine-config-operator/machine-config-server-txphr" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.494551 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-gs5md" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.516859 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7f2v\" (UniqueName: \"kubernetes.io/projected/fca2992a-2cb5-4b86-9cfe-66d8dae76acb-kube-api-access-p7f2v\") pod \"marketplace-operator-79b997595-b86ql\" (UID: \"fca2992a-2cb5-4b86-9cfe-66d8dae76acb\") " pod="openshift-marketplace/marketplace-operator-79b997595-b86ql" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.557690 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-4p68r"] Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.564885 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-jqxjt"] Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.578929 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9f2tz"] Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.608342 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:45:52 crc kubenswrapper[4869]: E0130 21:45:52.608950 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:45:53.108921826 +0000 UTC m=+153.994679861 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.628315 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5dlg7" Jan 30 21:45:52 crc kubenswrapper[4869]: W0130 21:45:52.665632 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c24fb1_6b9c_4688_bb3e_6bc97fce9856.slice/crio-58d761f408300824efd753f1f0a2e3ffe70da4b4848ba930ae8515d6ea7a580b WatchSource:0}: Error finding container 58d761f408300824efd753f1f0a2e3ffe70da4b4848ba930ae8515d6ea7a580b: Status 404 returned error can't find the container with id 58d761f408300824efd753f1f0a2e3ffe70da4b4848ba930ae8515d6ea7a580b Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.666527 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-v5sqv" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.673711 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-8brpl" event={"ID":"c7584681-881d-465c-b4e8-121404518807","Type":"ContainerStarted","Data":"449f0e97d79b7f2aa37ace73dd37439b18789d3defa371db7575fbcca559a0a2"} Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.673760 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-8brpl" event={"ID":"c7584681-881d-465c-b4e8-121404518807","Type":"ContainerStarted","Data":"7c2c82aa84bdb466a67cbf092347fd7fec21c94db9df9f79d4afcc1b69405096"} Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.674732 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-8brpl" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.676246 4869 patch_prober.go:28] interesting pod/console-operator-58897d9998-8brpl container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.676337 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8brpl" podUID="c7584681-881d-465c-b4e8-121404518807" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.677329 4869 generic.go:334] "Generic (PLEG): container finished" podID="8922ebf0-c10f-4963-b517-e51ba2284e99" containerID="05dbcc72a83fadc120d85758a4abbee3d3d032def1aa9b1ee18adbca3e792d7e" exitCode=0 Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.677700 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4kznw" event={"ID":"8922ebf0-c10f-4963-b517-e51ba2284e99","Type":"ContainerDied","Data":"05dbcc72a83fadc120d85758a4abbee3d3d032def1aa9b1ee18adbca3e792d7e"} Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.684775 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7qhjt" event={"ID":"901784d6-cd46-4181-a06f-f88c49faac0e","Type":"ContainerStarted","Data":"2aece860e10795701c3875e7301359f7f2369416de9d981cb4709ae99b2e0f6b"} Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.684816 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7qhjt" event={"ID":"901784d6-cd46-4181-a06f-f88c49faac0e","Type":"ContainerStarted","Data":"973e754faa0464ab37dbc29ece1390679d73ca2e6e7409deb2a604402fc3c179"} Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.685989 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-565td" event={"ID":"98f93152-3943-4bb2-ac4b-d2d79286e19d","Type":"ContainerStarted","Data":"1d32527709af78cb772ceaf181794286551128d9c3b5fe3f773a71cc1d25f0ce"} Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.688173 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w587f" event={"ID":"6d91a880-3f38-492b-b797-6fc24c2da65e","Type":"ContainerStarted","Data":"d4eb62eabccd8460d3c29bada257b11500618065d637554ea24bfb2162e80e57"} Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.689990 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-npnfg" event={"ID":"c1f0b262-4d72-49a2-aa45-918fbc89a9f2","Type":"ContainerStarted","Data":"f2225e91f65a703759609b41d0a89d2a257d61eb150bf0733e3651f4a4edce3f"} Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.690011 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-npnfg" event={"ID":"c1f0b262-4d72-49a2-aa45-918fbc89a9f2","Type":"ContainerStarted","Data":"0d1b7bccbf4c5d5dec67c3bc36ec846e9002684c76759d9846a061ffa837ace0"} Jan 30 21:45:52 crc kubenswrapper[4869]: W0130 21:45:52.690668 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod502a480e_94d6_425b_923c_ef29c26c09a2.slice/crio-daa2896f741b6e4cd5c437184bff481a3d4cc382695134dd886521cff3689086 WatchSource:0}: Error finding container daa2896f741b6e4cd5c437184bff481a3d4cc382695134dd886521cff3689086: Status 404 returned error can't find the container with id daa2896f741b6e4cd5c437184bff481a3d4cc382695134dd886521cff3689086 Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.690696 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-b86ql" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.692200 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-mq7xx" event={"ID":"c31c9863-59b6-490a-95d1-53d4a9707117","Type":"ContainerStarted","Data":"9ef6f5c75e0a43f997b7c9bc8fa82a311bfe5c341873ee14b1089473a6160ef3"} Jan 30 21:45:52 crc kubenswrapper[4869]: W0130 21:45:52.695397 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d54f896_380b_4253_bcba_8576673c606e.slice/crio-b4e90afd39b77f5f2e033d8d48b1c6df64ea17821c84943b720f83eb7999d4c9 WatchSource:0}: Error finding container b4e90afd39b77f5f2e033d8d48b1c6df64ea17821c84943b720f83eb7999d4c9: Status 404 returned error can't find the container with id b4e90afd39b77f5f2e033d8d48b1c6df64ea17821c84943b720f83eb7999d4c9 Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.696072 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-k4rff" event={"ID":"49697d03-35f3-4fa3-9141-2bb8ae8eccab","Type":"ContainerStarted","Data":"4c31c834e14b23a0ac9371aa8bb64d2bec25d495f59a4a24b592cc5b52ba77e1"} Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.696122 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-k4rff" event={"ID":"49697d03-35f3-4fa3-9141-2bb8ae8eccab","Type":"ContainerStarted","Data":"45ef9a833a99975f3cbf0884fe48886a657e095798e2b1758adb354707782d79"} Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.696689 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-k4rff" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.699057 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-k4rff container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.699666 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k4rff" podUID="49697d03-35f3-4fa3-9141-2bb8ae8eccab" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.702750 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkj8m" event={"ID":"f1da8dd6-5d73-4fc5-9e46-40c77930b2da","Type":"ContainerStarted","Data":"63066e6ec37d8fdcba3e3c4e1aea9d8d36b7915ad9a4abe4c31236147869cf9c"} Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.702801 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkj8m" event={"ID":"f1da8dd6-5d73-4fc5-9e46-40c77930b2da","Type":"ContainerStarted","Data":"b6d78228b8ea286fa91088ad5ecbffa0ea26379194b603beac7cca6f01e7bfb2"} Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.711514 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: 
\"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:52 crc kubenswrapper[4869]: E0130 21:45:52.713091 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:45:53.213075268 +0000 UTC m=+154.098833393 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.719780 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6tz69" event={"ID":"76a78a93-c264-4c12-b89a-b265e6731c7e","Type":"ContainerStarted","Data":"fa037f4fe79df29223db690161a79c3b4eb237f2b1d993c6f2b2533d693df722"} Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.719938 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6tz69" event={"ID":"76a78a93-c264-4c12-b89a-b265e6731c7e","Type":"ContainerStarted","Data":"b2dc5a79f6de2bee683514225bc6708527791d534b55646d0b71ffb7242cb24e"} Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.740122 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m95d6" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.773944 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-txphr" Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.801085 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2wk5x" event={"ID":"d8bca2a0-12fd-48ab-9507-ca0824a394cb","Type":"ContainerStarted","Data":"2eb279b2be611b20b6183ca435d0779992d7322166b0e94ce2e90ecf0d59b81b"} Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.812949 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:45:52 crc kubenswrapper[4869]: E0130 21:45:52.814611 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:45:53.314590508 +0000 UTC m=+154.200348523 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.819589 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-kgmf9" event={"ID":"400bd72e-e645-473e-b285-497d96f567ac","Type":"ContainerStarted","Data":"1c45ac7cf518db9d395cbfa46bd00b72384a6092573830b7520ac6ef26e35806"} Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.819643 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-kgmf9" event={"ID":"400bd72e-e645-473e-b285-497d96f567ac","Type":"ContainerStarted","Data":"9db05dde20baeb8ab45692ff2477da3d311ad402e328fcfc9bd77a4e5bd069d2"} Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.832886 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v5kfz"] Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.862590 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-hgz5t"] Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.864476 4869 generic.go:334] "Generic (PLEG): container finished" podID="1f418006-5265-449a-8e91-64144e311a6b" containerID="0f1c9547aadf26c10f1b996757f2d7a0751fe307144685037f16d3d9801cf2de" exitCode=0 Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.865952 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cjzmv" event={"ID":"1f418006-5265-449a-8e91-64144e311a6b","Type":"ContainerDied","Data":"0f1c9547aadf26c10f1b996757f2d7a0751fe307144685037f16d3d9801cf2de"} Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.901916 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9zpxh"] Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.920063 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:52 crc kubenswrapper[4869]: E0130 21:45:52.921059 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:45:53.421043222 +0000 UTC m=+154.306801247 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:52 crc kubenswrapper[4869]: I0130 21:45:52.944709 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6jqjl" podStartSLOduration=132.944676686 podStartE2EDuration="2m12.944676686s" podCreationTimestamp="2026-01-30 21:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:52.941596199 +0000 UTC m=+153.827354224" watchObservedRunningTime="2026-01-30 21:45:52.944676686 +0000 UTC m=+153.830434731" Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.021232 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:45:53 crc kubenswrapper[4869]: E0130 21:45:53.022170 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:45:53.522141907 +0000 UTC m=+154.407899942 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.125437 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:53 crc kubenswrapper[4869]: E0130 21:45:53.125917 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:45:53.625880606 +0000 UTC m=+154.511638631 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.191077 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-8brpl" podStartSLOduration=133.1910495 podStartE2EDuration="2m13.1910495s" podCreationTimestamp="2026-01-30 21:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:53.147513939 +0000 UTC m=+154.033271974" watchObservedRunningTime="2026-01-30 21:45:53.1910495 +0000 UTC m=+154.076807525" Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.227646 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:45:53 crc kubenswrapper[4869]: E0130 21:45:53.228065 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:45:53.728044836 +0000 UTC m=+154.613802861 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.230490 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-k4rff" podStartSLOduration=133.230470942 podStartE2EDuration="2m13.230470942s" podCreationTimestamp="2026-01-30 21:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:53.228416918 +0000 UTC m=+154.114174933" watchObservedRunningTime="2026-01-30 21:45:53.230470942 +0000 UTC m=+154.116228967" Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.328852 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:53 crc kubenswrapper[4869]: E0130 21:45:53.329731 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:45:53.82971309 +0000 UTC m=+154.715471125 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.368436 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-px2s8"] Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.373861 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fgmnt"] Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.379411 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-bgcv6"] Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.389971 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-mq7xx" podStartSLOduration=133.389939828 podStartE2EDuration="2m13.389939828s" podCreationTimestamp="2026-01-30 21:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:53.385204589 +0000 UTC m=+154.270962624" watchObservedRunningTime="2026-01-30 21:45:53.389939828 +0000 UTC m=+154.275697863" Jan 30 21:45:53 crc kubenswrapper[4869]: W0130 21:45:53.412451 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7c790ef_ae52_4809_b6f2_088811793867.slice/crio-81d5aaa937f32ae2eeb2cddcf651a58af14e88752d8f0a54dd9218a6d3b3b6d3 WatchSource:0}: Error finding container 81d5aaa937f32ae2eeb2cddcf651a58af14e88752d8f0a54dd9218a6d3b3b6d3: Status 404 returned error can't find the container with id 81d5aaa937f32ae2eeb2cddcf651a58af14e88752d8f0a54dd9218a6d3b3b6d3 Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.430441 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:45:53 crc kubenswrapper[4869]: E0130 21:45:53.430556 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:45:53.930530168 +0000 UTC m=+154.816288193 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.430631 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:53 crc kubenswrapper[4869]: E0130 21:45:53.431104 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:45:53.931084044 +0000 UTC m=+154.816842069 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.471539 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-m5j8f"] Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.484957 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkj8m" podStartSLOduration=133.484933642 podStartE2EDuration="2m13.484933642s" podCreationTimestamp="2026-01-30 21:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:53.484501758 +0000 UTC m=+154.370259803" watchObservedRunningTime="2026-01-30 21:45:53.484933642 +0000 UTC m=+154.370691677" Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.531283 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:45:53 crc kubenswrapper[4869]: E0130 21:45:53.531465 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:45:54.031424957 +0000 UTC m=+154.917182982 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.531598 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:53 crc kubenswrapper[4869]: E0130 21:45:53.531947 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:45:54.031934192 +0000 UTC m=+154.917692207 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.567695 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kkgzt"] Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.614368 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496825-lf7bw"] Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.622574 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4pn7q"] Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.626650 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-v5sqv"] Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.632322 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:45:53 crc kubenswrapper[4869]: E0130 21:45:53.632584 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:45:54.132525492 +0000 UTC m=+155.018283517 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.632775 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:53 crc kubenswrapper[4869]: E0130 21:45:53.633370 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:45:54.133360939 +0000 UTC m=+155.019118954 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.643994 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-brwlx"] Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.647513 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4t6ks"] Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.650592 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-nrn8v"] Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.652071 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-565td" podStartSLOduration=133.652044438 podStartE2EDuration="2m13.652044438s" podCreationTimestamp="2026-01-30 21:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:53.588713662 +0000 UTC m=+154.474471687" watchObservedRunningTime="2026-01-30 21:45:53.652044438 +0000 UTC m=+154.537802473" Jan 30 21:45:53 crc kubenswrapper[4869]: W0130 21:45:53.673599 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded1ad681_31ef_4081_9b80_acae8a1e58ac.slice/crio-084e4cb886432b50052f073c4dab9e7bb32665a2c0599ba3940afb5bc1931c30 WatchSource:0}: Error finding container 084e4cb886432b50052f073c4dab9e7bb32665a2c0599ba3940afb5bc1931c30: Status 404 returned error can't find the container with id 084e4cb886432b50052f073c4dab9e7bb32665a2c0599ba3940afb5bc1931c30 Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.735503 4869 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:45:53 crc kubenswrapper[4869]: E0130 21:45:53.735878 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:45:54.235854759 +0000 UTC m=+155.121612784 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.737016 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:53 crc kubenswrapper[4869]: E0130 21:45:53.737442 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:45:54.237432609 +0000 UTC m=+155.123190634 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.748148 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6bxlw"] Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.751055 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gs5md"] Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.768173 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q4k7p"] Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.772665 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m95d6"] Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.778661 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5dlg7"] Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.784251 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7qhjt" podStartSLOduration=133.784231023 podStartE2EDuration="2m13.784231023s" podCreationTimestamp="2026-01-30 21:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:53.782096216 +0000 UTC m=+154.667854251" watchObservedRunningTime="2026-01-30 21:45:53.784231023 +0000 UTC m=+154.669989048" Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.806650 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-mfmhf"] Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.813328 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-b86ql"] Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.838811 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:45:53 crc kubenswrapper[4869]: E0130 21:45:53.839099 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:45:54.339069402 +0000 UTC m=+155.224827427 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.839735 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:53 crc kubenswrapper[4869]: E0130 21:45:53.848307 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:45:54.348283672 +0000 UTC m=+155.234041687 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.923702 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-nrn8v" event={"ID":"3f9e62b6-dc56-4339-9dd7-0d71b8df4053","Type":"ContainerStarted","Data":"b3dd6aaa427ae3d15a2281b111824ffbe54e3d00f6207bcf1e6cd25913f77a27"} Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.927206 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-bgcv6" event={"ID":"f8d08a66-aeee-4eba-8436-4b124c45051a","Type":"ContainerStarted","Data":"7d67db100a1a07440adec988bfb245009b293aa1c7449c653e4f0f919dd98a00"} Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.934992 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-txphr" event={"ID":"b8569fbd-38cb-48af-ae0f-44e0e5df35ce","Type":"ContainerStarted","Data":"79f87875e2b0b57be28423586c5049428a1aacb039abe95ffdeda67d06d1b9d1"} Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.935036 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-txphr" event={"ID":"b8569fbd-38cb-48af-ae0f-44e0e5df35ce","Type":"ContainerStarted","Data":"7c9cb0913f3f407d0aedf86a715f3de32ae5bc38a4563770fb35655be31cdb36"} Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.943177 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-hgz5t" event={"ID":"d151f3f9-526d-40a2-8021-13aa609295b1","Type":"ContainerStarted","Data":"c9549ba25c5b0153d396c94413c82a19421b05d84f913fc5a214075a1f50bea5"} Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.943243 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-admission-controller-857f4d67dd-hgz5t" event={"ID":"d151f3f9-526d-40a2-8021-13aa609295b1","Type":"ContainerStarted","Data":"a1e578678c9ece45cfdfe9c3de32082188a26d0633d24f748eafb78b863fcf76"} Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.947031 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jqxjt" event={"ID":"5feff7d0-1eb1-42d4-8891-6758fdbcb01f","Type":"ContainerStarted","Data":"b4fac7709e1b2e164c6c4eb366bd18b7d1aa39c0a57b4364846fb69ad0065325"} Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.947083 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jqxjt" event={"ID":"5feff7d0-1eb1-42d4-8891-6758fdbcb01f","Type":"ContainerStarted","Data":"821da5b441cbd6b724a88ca1fe7d0a5600b05b140956d8970af79e27ec2e9916"} Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.948768 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5j8f" event={"ID":"1bb8d75c-15d1-495d-a2ec-4c4e873d45e3","Type":"ContainerStarted","Data":"ebba6477922a866232fd3e721cc481072f86748a98bfd30e2d1102c2bc7c3b96"} Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.950076 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4p68r" event={"ID":"502a480e-94d6-425b-923c-ef29c26c09a2","Type":"ContainerStarted","Data":"4f1b37e6a83bb20ece5e8598a7b36c63a6a3523ea67a28c266315060a32e0b31"} Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.950124 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4p68r" event={"ID":"502a480e-94d6-425b-923c-ef29c26c09a2","Type":"ContainerStarted","Data":"daa2896f741b6e4cd5c437184bff481a3d4cc382695134dd886521cff3689086"} Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.953287 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-px2s8" event={"ID":"8b2bd278-ed0e-4e4e-90a6-9286bc29a664","Type":"ContainerStarted","Data":"e5d3ed060c23d02953304dc995fa37acd5c66638bc04e098e21d100677cda710"} Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.953768 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-px2s8" Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.955064 4869 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-px2s8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.955132 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-px2s8" podUID="8b2bd278-ed0e-4e4e-90a6-9286bc29a664" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.956876 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4q8n" 
event={"ID":"20c24fb1-6b9c-4688-bb3e-6bc97fce9856","Type":"ContainerStarted","Data":"ab41174f167e2e0a914283e6dfe1df9a3a87cfbff56339f682b5022cfedfc582"} Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.956925 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4q8n" event={"ID":"20c24fb1-6b9c-4688-bb3e-6bc97fce9856","Type":"ContainerStarted","Data":"58d761f408300824efd753f1f0a2e3ffe70da4b4848ba930ae8515d6ea7a580b"} Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.958124 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4q8n" Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.959756 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:45:53 crc kubenswrapper[4869]: E0130 21:45:53.960102 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:45:54.460078875 +0000 UTC m=+155.345836900 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.960164 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:53 crc kubenswrapper[4869]: E0130 21:45:53.960563 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:45:54.46055533 +0000 UTC m=+155.346313355 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:53 crc kubenswrapper[4869]: W0130 21:45:53.961071 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b85629c_9742_4a56_b91a_601938afd139.slice/crio-f24b36ddba65a97f4926eb67e509c543d57dd958ded5978dede93907f54dfd8a WatchSource:0}: Error finding container f24b36ddba65a97f4926eb67e509c543d57dd958ded5978dede93907f54dfd8a: Status 404 returned error can't find the container with id f24b36ddba65a97f4926eb67e509c543d57dd958ded5978dede93907f54dfd8a Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.961608 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt" event={"ID":"e7c790ef-ae52-4809-b6f2-088811793867","Type":"ContainerStarted","Data":"81d5aaa937f32ae2eeb2cddcf651a58af14e88752d8f0a54dd9218a6d3b3b6d3"} Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.970299 4869 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-t4q8n container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.970384 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4q8n" podUID="20c24fb1-6b9c-4688-bb3e-6bc97fce9856" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.973641 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-brwlx" event={"ID":"ed1ad681-31ef-4081-9b80-acae8a1e58ac","Type":"ContainerStarted","Data":"084e4cb886432b50052f073c4dab9e7bb32665a2c0599ba3940afb5bc1931c30"} Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.979262 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q4k7p" event={"ID":"98f852c4-a74a-4153-a095-136d3ef7d5c2","Type":"ContainerStarted","Data":"bca8561d3db7fd2d7301a65c29d60ac7bd17a83d2d406cc202da352cd13ec024"} Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.983888 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4t6ks" event={"ID":"8b3074a3-1e32-4024-bcd1-0a7365f9b92a","Type":"ContainerStarted","Data":"7f47496d3c40f3714a5cf15d5c212cf8c55ab8880fabdb3de323631b9e0fe06d"} Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.992029 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v5kfz" 
event={"ID":"a8b78a73-2ae1-4eed-8110-943a6ae6fe04","Type":"ContainerStarted","Data":"aa74ab4e60093157afa9732eef22f58e5327d79ec66c89313660e02ab26939d0"} Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.992228 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v5kfz" event={"ID":"a8b78a73-2ae1-4eed-8110-943a6ae6fe04","Type":"ContainerStarted","Data":"1e4a72741f44cd7dadf8232efed2a9adc603be3300f06d0343b882738dfa6560"} Jan 30 21:45:53 crc kubenswrapper[4869]: I0130 21:45:53.997155 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6bxlw" event={"ID":"492fddc8-5b29-4b32-9b4c-9831317fae23","Type":"ContainerStarted","Data":"243e308384c166009fdeed686c0876b1aa4d764fbad691336f0c56673b30acbb"} Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.000319 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cjzmv" event={"ID":"1f418006-5265-449a-8e91-64144e311a6b","Type":"ContainerStarted","Data":"9f8c619ff1428e82ad4341ce0002ef5c8c543f4c8992045b83d64e5c7db1466d"} Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.000495 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cjzmv" Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.005545 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kkgzt" event={"ID":"636fcb63-d2ed-4d3a-84c8-caf785e27f12","Type":"ContainerStarted","Data":"b3d108e80c6144e50364769a446576a98dc9e1cfe6b2efab838711fbb18a9c0f"} Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.011876 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9zpxh" event={"ID":"f460e85e-82b0-4ae3-b8f2-c9ec043b0dbc","Type":"ContainerStarted","Data":"e9d260c63b3255998c164c7fc8c490012c04183ff7b9a48e35efe75363669c74"} Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.011967 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9zpxh" event={"ID":"f460e85e-82b0-4ae3-b8f2-c9ec043b0dbc","Type":"ContainerStarted","Data":"b45aaad9dd048859343201b2dbfed5697d585e632ad4548e3e024aa134b10879"} Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.012453 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9zpxh" Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.017920 4869 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-9zpxh container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body= Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.017991 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9zpxh" podUID="f460e85e-82b0-4ae3-b8f2-c9ec043b0dbc" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.045813 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns-operator/dns-operator-744455d44c-kgmf9" event={"ID":"400bd72e-e645-473e-b285-497d96f567ac","Type":"ContainerStarted","Data":"49316609c196c1dd4eaf3a4911d2169f8afc635d1e1caaca946207a401c934be"} Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.068290 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:45:54 crc kubenswrapper[4869]: E0130 21:45:54.074400 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:45:54.574333146 +0000 UTC m=+155.460091171 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.089582 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gs5md" event={"ID":"f2aeffca-8872-4e12-8651-4fa2fe16be8e","Type":"ContainerStarted","Data":"291709d596b995bf26735e55727844ebe4ebdd42215a0503509ceb16bc8f07bd"} Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.138323 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6tz69" event={"ID":"76a78a93-c264-4c12-b89a-b265e6731c7e","Type":"ContainerStarted","Data":"eb6fe969ba44fbec78c9f3be1f6bb96a203470bd744da6ea1998c15a2605f517"} Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.148520 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-2zzvw" event={"ID":"3764a402-d82c-498f-91ab-91f657d352c6","Type":"ContainerStarted","Data":"c29c0151eee7055c62234cd2e80bf6b9304ecce6a5081784eed050e7269a7097"} Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.148595 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-2zzvw" event={"ID":"3764a402-d82c-498f-91ab-91f657d352c6","Type":"ContainerStarted","Data":"454bf212db8b0d24cb03bcef063f8097c005cf9ef4f7fc7038b39eb3d500a253"} Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.174178 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-v5sqv" event={"ID":"215f240b-da02-40a5-b1f4-c5cfe17407b6","Type":"ContainerStarted","Data":"ae4c4d44d7110655bb19814c65a71c8200e603ee7ad8547e922994f560facb0a"} Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.176352 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" 
Jan 30 21:45:54 crc kubenswrapper[4869]: E0130 21:45:54.177682 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:45:54.677661932 +0000 UTC m=+155.563419957 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.191463 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-npnfg" podStartSLOduration=134.191435726 podStartE2EDuration="2m14.191435726s" podCreationTimestamp="2026-01-30 21:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:54.190864528 +0000 UTC m=+155.076622563" watchObservedRunningTime="2026-01-30 21:45:54.191435726 +0000 UTC m=+155.077193751" Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.237246 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9f2tz" event={"ID":"5d54f896-380b-4253-bcba-8576673c606e","Type":"ContainerStarted","Data":"22b5183530b22822d8ae2af81eeae4b65bccd9ace25042a57f0fb765486b1ec2"} Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.237326 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9f2tz" event={"ID":"5d54f896-380b-4253-bcba-8576673c606e","Type":"ContainerStarted","Data":"b4e90afd39b77f5f2e033d8d48b1c6df64ea17821c84943b720f83eb7999d4c9"} Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.241222 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9f2tz" Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.252957 4869 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-9f2tz container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.253120 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9f2tz" podUID="5d54f896-380b-4253-bcba-8576673c606e" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.265750 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4pn7q" event={"ID":"77070268-41bc-4f03-bddd-5470d23e03b8","Type":"ContainerStarted","Data":"7be66a156d2bc59bb79afe803c26301e98029f6e7ddfda32482f95183a8b456c"} Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.278743 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-lf7bw" event={"ID":"43f6cf85-1016-4b61-b95a-9ad7c66bb29f","Type":"ContainerStarted","Data":"c1066cd4f87de1004607d9ef923935985cbe0d2d85c58407e8093adf8bb55c15"} Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.281184 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:45:54 crc kubenswrapper[4869]: E0130 21:45:54.287747 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:45:54.787705009 +0000 UTC m=+155.673463034 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.337297 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9zpxh" podStartSLOduration=134.337267412 podStartE2EDuration="2m14.337267412s" podCreationTimestamp="2026-01-30 21:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:54.277713595 +0000 UTC m=+155.163471640" watchObservedRunningTime="2026-01-30 21:45:54.337267412 +0000 UTC m=+155.223025437" Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.346271 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v5kfz" podStartSLOduration=134.346244235 podStartE2EDuration="2m14.346244235s" podCreationTimestamp="2026-01-30 21:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:54.335777654 +0000 UTC m=+155.221535679" watchObservedRunningTime="2026-01-30 21:45:54.346244235 +0000 UTC m=+155.232002260" Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.356387 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4kznw" event={"ID":"8922ebf0-c10f-4963-b517-e51ba2284e99","Type":"ContainerStarted","Data":"4090ed20dac7e64c7d6ecd87995a6ac6aa9c1fab15800ff197dcc4502fe2daa2"} Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.382068 4869 generic.go:334] "Generic (PLEG): container finished" podID="6d91a880-3f38-492b-b797-6fc24c2da65e" containerID="a7fbf3996a0e4c698939628d014ddbe9222273f182aa58fd04ac14132557e554" exitCode=0 Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.382320 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w587f" 
event={"ID":"6d91a880-3f38-492b-b797-6fc24c2da65e","Type":"ContainerDied","Data":"a7fbf3996a0e4c698939628d014ddbe9222273f182aa58fd04ac14132557e554"} Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.383145 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-px2s8" podStartSLOduration=134.383122336 podStartE2EDuration="2m14.383122336s" podCreationTimestamp="2026-01-30 21:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:54.380536525 +0000 UTC m=+155.266294550" watchObservedRunningTime="2026-01-30 21:45:54.383122336 +0000 UTC m=+155.268880361" Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.390538 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:54 crc kubenswrapper[4869]: E0130 21:45:54.391183 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:45:54.891160489 +0000 UTC m=+155.776918514 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.398588 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5dlg7" event={"ID":"71313c5e-2597-4c09-89c3-02a17afaeda5","Type":"ContainerStarted","Data":"781598b3f753b502eff4364d5a5be6b811ec34e6d1d145708ad66185da69b141"} Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.399616 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-k4rff container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.399658 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k4rff" podUID="49697d03-35f3-4fa3-9141-2bb8ae8eccab" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.439112 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6tz69" podStartSLOduration=134.439073469 podStartE2EDuration="2m14.439073469s" podCreationTimestamp="2026-01-30 21:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-30 21:45:54.432180393 +0000 UTC m=+155.317938418" watchObservedRunningTime="2026-01-30 21:45:54.439073469 +0000 UTC m=+155.324831494" Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.442042 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-kgmf9" podStartSLOduration=134.442024342 podStartE2EDuration="2m14.442024342s" podCreationTimestamp="2026-01-30 21:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:54.413600617 +0000 UTC m=+155.299358652" watchObservedRunningTime="2026-01-30 21:45:54.442024342 +0000 UTC m=+155.327782367" Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.457107 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-2zzvw" Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.457387 4869 csr.go:261] certificate signing request csr-js2x7 is approved, waiting to be issued Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.457432 4869 patch_prober.go:28] interesting pod/router-default-5444994796-2zzvw container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.457461 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2zzvw" podUID="3764a402-d82c-498f-91ab-91f657d352c6" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.472785 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cjzmv" podStartSLOduration=134.472768271 podStartE2EDuration="2m14.472768271s" podCreationTimestamp="2026-01-30 21:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:54.472124102 +0000 UTC m=+155.357882127" watchObservedRunningTime="2026-01-30 21:45:54.472768271 +0000 UTC m=+155.358526296" Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.475219 4869 csr.go:257] certificate signing request csr-js2x7 is issued Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.499330 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:45:54 crc kubenswrapper[4869]: E0130 21:45:54.499478 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:45:54.999444182 +0000 UTC m=+155.885202207 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.500374 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:54 crc kubenswrapper[4869]: E0130 21:45:54.500739 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:45:55.000723932 +0000 UTC m=+155.886481957 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.516755 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4p68r" podStartSLOduration=134.516696506 podStartE2EDuration="2m14.516696506s" podCreationTimestamp="2026-01-30 21:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:54.51556327 +0000 UTC m=+155.401321295" watchObservedRunningTime="2026-01-30 21:45:54.516696506 +0000 UTC m=+155.402454531" Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.601806 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:45:54 crc kubenswrapper[4869]: E0130 21:45:54.602161 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:45:55.102120018 +0000 UTC m=+155.987878043 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.602651 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:54 crc kubenswrapper[4869]: E0130 21:45:54.603018 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:45:55.103003196 +0000 UTC m=+155.988761221 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.613953 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-2zzvw" podStartSLOduration=134.61392093 podStartE2EDuration="2m14.61392093s" podCreationTimestamp="2026-01-30 21:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:54.55808848 +0000 UTC m=+155.443846525" watchObservedRunningTime="2026-01-30 21:45:54.61392093 +0000 UTC m=+155.499678955" Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.632974 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4q8n" podStartSLOduration=134.632947729 podStartE2EDuration="2m14.632947729s" podCreationTimestamp="2026-01-30 21:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:54.601294671 +0000 UTC m=+155.487052696" watchObservedRunningTime="2026-01-30 21:45:54.632947729 +0000 UTC m=+155.518705744" Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.669415 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2wk5x" podStartSLOduration=134.669396508 podStartE2EDuration="2m14.669396508s" podCreationTimestamp="2026-01-30 21:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:54.634608422 +0000 UTC m=+155.520366447" watchObservedRunningTime="2026-01-30 21:45:54.669396508 +0000 UTC m=+155.555154533" Jan 30 21:45:54 crc kubenswrapper[4869]: 
I0130 21:45:54.676985 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9f2tz" podStartSLOduration=134.676976276 podStartE2EDuration="2m14.676976276s" podCreationTimestamp="2026-01-30 21:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:54.667650123 +0000 UTC m=+155.553408148" watchObservedRunningTime="2026-01-30 21:45:54.676976276 +0000 UTC m=+155.562734301" Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.703872 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:45:54 crc kubenswrapper[4869]: E0130 21:45:54.704350 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:45:55.204330919 +0000 UTC m=+156.090088954 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.746922 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-txphr" podStartSLOduration=5.746887649 podStartE2EDuration="5.746887649s" podCreationTimestamp="2026-01-30 21:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:54.706117075 +0000 UTC m=+155.591875110" watchObservedRunningTime="2026-01-30 21:45:54.746887649 +0000 UTC m=+155.632645664" Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.805345 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:54 crc kubenswrapper[4869]: E0130 21:45:54.805774 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:45:55.305762475 +0000 UTC m=+156.191520500 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:54 crc kubenswrapper[4869]: I0130 21:45:54.906422 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:45:54 crc kubenswrapper[4869]: E0130 21:45:54.907035 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:45:55.407016175 +0000 UTC m=+156.292774200 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.007836 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:55 crc kubenswrapper[4869]: E0130 21:45:55.008169 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:45:55.508156393 +0000 UTC m=+156.393914418 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.108708 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:45:55 crc kubenswrapper[4869]: E0130 21:45:55.108845 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:45:55.608821835 +0000 UTC m=+156.494579860 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.109484 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:55 crc kubenswrapper[4869]: E0130 21:45:55.109813 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:45:55.609805226 +0000 UTC m=+156.495563311 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.137367 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-8brpl" Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.210652 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:45:55 crc kubenswrapper[4869]: E0130 21:45:55.211092 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:45:55.711060268 +0000 UTC m=+156.596818293 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.312288 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:55 crc kubenswrapper[4869]: E0130 21:45:55.312791 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:45:55.812768422 +0000 UTC m=+156.698526447 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.405411 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m95d6" event={"ID":"99023de1-0d21-49eb-b133-1403f9224808","Type":"ContainerStarted","Data":"f277cb915fd4c8db7c82871f983beb6061cc07fa4bf8db37dee7195b31ebaffd"} Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.405456 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m95d6" event={"ID":"99023de1-0d21-49eb-b133-1403f9224808","Type":"ContainerStarted","Data":"b356c7845593c364628337c273868bf97e9fd8cf16cc86015915f475fc6fa1e0"} Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.407336 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-brwlx" event={"ID":"ed1ad681-31ef-4081-9b80-acae8a1e58ac","Type":"ContainerStarted","Data":"187a75cece434f009c2949572990429bd34c84c9ef14b729ecc2528fbfd350df"} Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.407422 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-brwlx" event={"ID":"ed1ad681-31ef-4081-9b80-acae8a1e58ac","Type":"ContainerStarted","Data":"6176a017c43a4ce913a47cd856d3582f61f2bb4dd649d6bf85cd5b1fc24d401f"} Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.408698 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-px2s8" event={"ID":"8b2bd278-ed0e-4e4e-90a6-9286bc29a664","Type":"ContainerStarted","Data":"261a5a6598d90803f2d7a66da6d9edcfca0133fcb38e5535d12b9f238deb5ef8"} Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.409825 4869 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-px2s8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.409882 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-px2s8" podUID="8b2bd278-ed0e-4e4e-90a6-9286bc29a664" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.411260 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt" event={"ID":"e7c790ef-ae52-4809-b6f2-088811793867","Type":"ContainerStarted","Data":"d2c17cd7962638c1d69167c3957921e692a78b0a67de0c31a0afa22696cd0a23"} Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.411883 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt" Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.413196 4869 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:45:55 crc kubenswrapper[4869]: E0130 21:45:55.413933 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:45:55.913887629 +0000 UTC m=+156.799645664 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.414630 4869 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-fgmnt container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.32:6443/healthz\": dial tcp 10.217.0.32:6443: connect: connection refused" start-of-body= Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.414694 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt" podUID="e7c790ef-ae52-4809-b6f2-088811793867" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.32:6443/healthz\": dial tcp 10.217.0.32:6443: connect: connection refused" Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.416070 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-nrn8v" event={"ID":"3f9e62b6-dc56-4339-9dd7-0d71b8df4053","Type":"ContainerStarted","Data":"9842880b4e5e88889d31a156819106ff0025a44f117bb1ee39b9b4d1d606bf66"} Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.416110 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-nrn8v" event={"ID":"3f9e62b6-dc56-4339-9dd7-0d71b8df4053","Type":"ContainerStarted","Data":"60e87a0ed2684527acc82c903e99d29bd4fca0f4d9801d6c0760cf02a4c09653"} Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.417140 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-bgcv6" event={"ID":"f8d08a66-aeee-4eba-8436-4b124c45051a","Type":"ContainerStarted","Data":"61f90ac445d270f9076ee1e536095fe0321a83c5b3e29d30c8b1074e4ee64b41"} Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.418946 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jqxjt" event={"ID":"5feff7d0-1eb1-42d4-8891-6758fdbcb01f","Type":"ContainerStarted","Data":"c909f1ab96ccb3c2c8623e0614f70d9564ec0496ebe2967c3a9546de05d193cb"} Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.419972 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6bxlw" event={"ID":"492fddc8-5b29-4b32-9b4c-9831317fae23","Type":"ContainerStarted","Data":"ff7c4ab039d6dd8dd431670143240044fa7a6b50a7c88a64dbc3655b73ddb6d7"} Jan 
30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.421274 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5j8f" event={"ID":"1bb8d75c-15d1-495d-a2ec-4c4e873d45e3","Type":"ContainerStarted","Data":"a4b67baa67fcc81a68207b9b58e6f98315a032e5f246303fbd1a851e575a99f7"} Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.421303 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5j8f" event={"ID":"1bb8d75c-15d1-495d-a2ec-4c4e873d45e3","Type":"ContainerStarted","Data":"59125d92256e175f8335b7b84495d1c8071ffef1742455525a2f92f7af86d6d4"} Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.422682 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kkgzt" event={"ID":"636fcb63-d2ed-4d3a-84c8-caf785e27f12","Type":"ContainerStarted","Data":"51d352c6f1a8d47bc12499c79e8a081409b0c8a381d34ffc09014c3becb0f06e"} Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.423842 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4t6ks" event={"ID":"8b3074a3-1e32-4024-bcd1-0a7365f9b92a","Type":"ContainerStarted","Data":"a8ecbd005d13bf86795c3106180ebdeadcc6ad48af2c42edd8522178aafd2b13"} Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.425801 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4kznw" event={"ID":"8922ebf0-c10f-4963-b517-e51ba2284e99","Type":"ContainerStarted","Data":"21b3aca5ed6078df3d19030dd8c61926ef92a2d9d5b5c3fd463e2d11001fe3b6"} Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.427180 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-v5sqv" event={"ID":"215f240b-da02-40a5-b1f4-c5cfe17407b6","Type":"ContainerStarted","Data":"098369f7ab0f973141acd10f2a1523038fe527a6a79f8c204f1c616007c696ef"} Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.428183 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q4k7p" event={"ID":"98f852c4-a74a-4153-a095-136d3ef7d5c2","Type":"ContainerStarted","Data":"5171f32641127a25334e2c96861be11aab043dd8384fb6fed25dc159f04ce900"} Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.428402 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q4k7p" Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.430363 4869 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-q4k7p container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.430418 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q4k7p" podUID="98f852c4-a74a-4153-a095-136d3ef7d5c2" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.430527 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-79b997595-b86ql" event={"ID":"fca2992a-2cb5-4b86-9cfe-66d8dae76acb","Type":"ContainerStarted","Data":"9766d8dfe52e4fbd53538cdf3bef77f8e1e3437d7f110649ae1e2c5861e301f1"} Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.430567 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-b86ql" event={"ID":"fca2992a-2cb5-4b86-9cfe-66d8dae76acb","Type":"ContainerStarted","Data":"888bf27858c12005f23724ca3347569a893d95cd06241988a5bd02bc3418537f"} Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.430587 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-b86ql" Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.431946 4869 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-b86ql container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.431983 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-b86ql" podUID="fca2992a-2cb5-4b86-9cfe-66d8dae76acb" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.432189 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w587f" event={"ID":"6d91a880-3f38-492b-b797-6fc24c2da65e","Type":"ContainerStarted","Data":"4f8e0501f272206c912adb374eac44f72d87101c344eb2bb86da4c794a719d48"} Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.433268 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5dlg7" event={"ID":"71313c5e-2597-4c09-89c3-02a17afaeda5","Type":"ContainerStarted","Data":"c56f5cb0f6d65be7f06a4f7520340e46f2fa08f83b168295e82e2d9ca55d95a8"} Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.434636 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4pn7q" event={"ID":"77070268-41bc-4f03-bddd-5470d23e03b8","Type":"ContainerStarted","Data":"2e10dc668a509e5c72e2e0f5968a5aef46d9e525910959d6606f1963f3950c1e"} Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.434673 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4pn7q" event={"ID":"77070268-41bc-4f03-bddd-5470d23e03b8","Type":"ContainerStarted","Data":"a3c5094667e992d6b184ace136098b32113e4967e358c535c7cad24c2b16ec81"} Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.434773 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4pn7q" Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.435750 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-lf7bw" event={"ID":"43f6cf85-1016-4b61-b95a-9ad7c66bb29f","Type":"ContainerStarted","Data":"2f62f264d8262cf23977910469a2aa2e9344f23d979501289318b766fb14f547"} Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.436941 4869 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-mfmhf" event={"ID":"9b85629c-9742-4a56-b91a-601938afd139","Type":"ContainerStarted","Data":"dbe4704f408f680f2833c124348901a854f82d492e3a3489036dc87dca042133"} Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.436967 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-mfmhf" event={"ID":"9b85629c-9742-4a56-b91a-601938afd139","Type":"ContainerStarted","Data":"f24b36ddba65a97f4926eb67e509c543d57dd958ded5978dede93907f54dfd8a"} Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.439300 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-hgz5t" event={"ID":"d151f3f9-526d-40a2-8021-13aa609295b1","Type":"ContainerStarted","Data":"0d487b24d6b2daed2ee3ec6eaf5ae47d8c2f03e99d31fc48c7496dc1c8d6e01a"} Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.439659 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-k4rff container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.439708 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k4rff" podUID="49697d03-35f3-4fa3-9141-2bb8ae8eccab" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.439912 4869 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-9f2tz container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.439945 4869 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-9zpxh container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body= Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.439952 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9f2tz" podUID="5d54f896-380b-4253-bcba-8576673c606e" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.439963 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9zpxh" podUID="f460e85e-82b0-4ae3-b8f2-c9ec043b0dbc" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.440023 4869 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-t4q8n container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.440040 4869 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4q8n" podUID="20c24fb1-6b9c-4688-bb3e-6bc97fce9856" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.483983 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-30 21:40:54 +0000 UTC, rotation deadline is 2026-12-19 14:20:38.742758046 +0000 UTC Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.484026 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7744h34m43.258734007s for next certificate rotation Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.490880 4869 patch_prober.go:28] interesting pod/router-default-5444994796-2zzvw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 21:45:55 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Jan 30 21:45:55 crc kubenswrapper[4869]: [+]process-running ok Jan 30 21:45:55 crc kubenswrapper[4869]: healthz check failed Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.490958 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2zzvw" podUID="3764a402-d82c-498f-91ab-91f657d352c6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.514404 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m95d6" podStartSLOduration=135.514384816 podStartE2EDuration="2m15.514384816s" podCreationTimestamp="2026-01-30 21:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:55.457580696 +0000 UTC m=+156.343338721" watchObservedRunningTime="2026-01-30 21:45:55.514384816 +0000 UTC m=+156.400142841" Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.514502 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:55 crc kubenswrapper[4869]: E0130 21:45:55.515573 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:45:56.015557303 +0000 UTC m=+156.901315328 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.576218 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q4k7p" podStartSLOduration=135.576200293 podStartE2EDuration="2m15.576200293s" podCreationTimestamp="2026-01-30 21:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:55.526100955 +0000 UTC m=+156.411858980" watchObservedRunningTime="2026-01-30 21:45:55.576200293 +0000 UTC m=+156.461958308" Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.616983 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:45:55 crc kubenswrapper[4869]: E0130 21:45:55.617197 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:45:56.117169955 +0000 UTC m=+157.002927980 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.617603 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:55 crc kubenswrapper[4869]: E0130 21:45:55.617986 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:45:56.11796742 +0000 UTC m=+157.003725445 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.619984 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-bgcv6" podStartSLOduration=135.619970163 podStartE2EDuration="2m15.619970163s" podCreationTimestamp="2026-01-30 21:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:55.577059051 +0000 UTC m=+156.462817076" watchObservedRunningTime="2026-01-30 21:45:55.619970163 +0000 UTC m=+156.505728178" Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.621732 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-hgz5t" podStartSLOduration=135.621725938 podStartE2EDuration="2m15.621725938s" podCreationTimestamp="2026-01-30 21:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:55.621193681 +0000 UTC m=+156.506951706" watchObservedRunningTime="2026-01-30 21:45:55.621725938 +0000 UTC m=+156.507483963" Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.641235 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-mfmhf" podStartSLOduration=6.641217632 podStartE2EDuration="6.641217632s" podCreationTimestamp="2026-01-30 21:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:55.639563691 +0000 UTC m=+156.525321716" watchObservedRunningTime="2026-01-30 21:45:55.641217632 +0000 UTC m=+156.526975657" Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.705787 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kkgzt" podStartSLOduration=135.705766867 podStartE2EDuration="2m15.705766867s" podCreationTimestamp="2026-01-30 21:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:55.658756855 +0000 UTC m=+156.544514880" watchObservedRunningTime="2026-01-30 21:45:55.705766867 +0000 UTC m=+156.591524892" Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.706070 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-4kznw" podStartSLOduration=135.706064056 podStartE2EDuration="2m15.706064056s" podCreationTimestamp="2026-01-30 21:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:55.702161473 +0000 UTC m=+156.587919498" watchObservedRunningTime="2026-01-30 21:45:55.706064056 +0000 UTC m=+156.591822081" Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.723797 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:45:55 crc kubenswrapper[4869]: E0130 21:45:55.724301 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:45:56.22428002 +0000 UTC m=+157.110038045 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.725045 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w587f" podStartSLOduration=135.725020233 podStartE2EDuration="2m15.725020233s" podCreationTimestamp="2026-01-30 21:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:55.724021371 +0000 UTC m=+156.609779396" watchObservedRunningTime="2026-01-30 21:45:55.725020233 +0000 UTC m=+156.610778258" Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.751607 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4pn7q" podStartSLOduration=135.751591111 podStartE2EDuration="2m15.751591111s" podCreationTimestamp="2026-01-30 21:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:55.749394921 +0000 UTC m=+156.635152946" watchObservedRunningTime="2026-01-30 21:45:55.751591111 +0000 UTC m=+156.637349136" Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.774998 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-lf7bw" podStartSLOduration=55.774977437 podStartE2EDuration="55.774977437s" podCreationTimestamp="2026-01-30 21:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:55.772385536 +0000 UTC m=+156.658143561" watchObservedRunningTime="2026-01-30 21:45:55.774977437 +0000 UTC m=+156.660735462" Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.795734 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jqxjt" podStartSLOduration=135.795709671 podStartE2EDuration="2m15.795709671s" podCreationTimestamp="2026-01-30 21:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:55.793347407 +0000 UTC m=+156.679105452" watchObservedRunningTime="2026-01-30 21:45:55.795709671 +0000 UTC m=+156.681467696" Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 
21:45:55.809774 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-v5sqv" podStartSLOduration=135.809757474 podStartE2EDuration="2m15.809757474s" podCreationTimestamp="2026-01-30 21:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:55.807856123 +0000 UTC m=+156.693614148" watchObservedRunningTime="2026-01-30 21:45:55.809757474 +0000 UTC m=+156.695515499" Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.825480 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:55 crc kubenswrapper[4869]: E0130 21:45:55.825840 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:45:56.32582337 +0000 UTC m=+157.211581395 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.831466 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt" podStartSLOduration=135.831436177 podStartE2EDuration="2m15.831436177s" podCreationTimestamp="2026-01-30 21:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:55.82835824 +0000 UTC m=+156.714116265" watchObservedRunningTime="2026-01-30 21:45:55.831436177 +0000 UTC m=+156.717194202" Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.869392 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5j8f" podStartSLOduration=135.869373332 podStartE2EDuration="2m15.869373332s" podCreationTimestamp="2026-01-30 21:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:55.849532017 +0000 UTC m=+156.735290042" watchObservedRunningTime="2026-01-30 21:45:55.869373332 +0000 UTC m=+156.755131357" Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.891635 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-brwlx" podStartSLOduration=135.891614173 podStartE2EDuration="2m15.891614173s" podCreationTimestamp="2026-01-30 21:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:55.890080015 +0000 UTC m=+156.775838040" watchObservedRunningTime="2026-01-30 
21:45:55.891614173 +0000 UTC m=+156.777372198" Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.893003 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-b86ql" podStartSLOduration=135.892992816 podStartE2EDuration="2m15.892992816s" podCreationTimestamp="2026-01-30 21:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:55.872108528 +0000 UTC m=+156.757866553" watchObservedRunningTime="2026-01-30 21:45:55.892992816 +0000 UTC m=+156.778750861" Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.921984 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6bxlw" podStartSLOduration=135.921954509 podStartE2EDuration="2m15.921954509s" podCreationTimestamp="2026-01-30 21:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:55.9092933 +0000 UTC m=+156.795051325" watchObservedRunningTime="2026-01-30 21:45:55.921954509 +0000 UTC m=+156.807712554" Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.926544 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:45:55 crc kubenswrapper[4869]: E0130 21:45:55.927163 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:45:56.427138573 +0000 UTC m=+157.312896598 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.927253 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:55 crc kubenswrapper[4869]: E0130 21:45:55.927616 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:45:56.427604117 +0000 UTC m=+157.313362142 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:55 crc kubenswrapper[4869]: I0130 21:45:55.941727 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5dlg7" podStartSLOduration=135.941706392 podStartE2EDuration="2m15.941706392s" podCreationTimestamp="2026-01-30 21:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:55.939283376 +0000 UTC m=+156.825041411" watchObservedRunningTime="2026-01-30 21:45:55.941706392 +0000 UTC m=+156.827464417" Jan 30 21:45:56 crc kubenswrapper[4869]: I0130 21:45:56.028071 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:45:56 crc kubenswrapper[4869]: E0130 21:45:56.028249 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:45:56.528218298 +0000 UTC m=+157.413976323 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:56 crc kubenswrapper[4869]: I0130 21:45:56.028384 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:56 crc kubenswrapper[4869]: E0130 21:45:56.028710 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:45:56.528703083 +0000 UTC m=+157.414461108 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:56 crc kubenswrapper[4869]: I0130 21:45:56.128957 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:45:56 crc kubenswrapper[4869]: E0130 21:45:56.129143 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:45:56.629105317 +0000 UTC m=+157.514863342 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:56 crc kubenswrapper[4869]: I0130 21:45:56.129270 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:56 crc kubenswrapper[4869]: E0130 21:45:56.129651 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:45:56.629640404 +0000 UTC m=+157.515398429 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:56 crc kubenswrapper[4869]: I0130 21:45:56.135846 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-4kznw" Jan 30 21:45:56 crc kubenswrapper[4869]: I0130 21:45:56.135942 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-4kznw" Jan 30 21:45:56 crc kubenswrapper[4869]: I0130 21:45:56.137314 4869 patch_prober.go:28] interesting pod/apiserver-76f77b778f-4kznw container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.7:8443/livez\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 30 21:45:56 crc kubenswrapper[4869]: I0130 21:45:56.137372 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-4kznw" podUID="8922ebf0-c10f-4963-b517-e51ba2284e99" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.7:8443/livez\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 30 21:45:56 crc kubenswrapper[4869]: I0130 21:45:56.229738 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:45:56 crc kubenswrapper[4869]: E0130 21:45:56.229865 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:45:56.729844122 +0000 UTC m=+157.615602147 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:56 crc kubenswrapper[4869]: I0130 21:45:56.230269 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:56 crc kubenswrapper[4869]: E0130 21:45:56.230807 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 21:45:56.73078157 +0000 UTC m=+157.616539595 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:56 crc kubenswrapper[4869]: I0130 21:45:56.331320 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:45:56 crc kubenswrapper[4869]: E0130 21:45:56.331562 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:45:56.831522205 +0000 UTC m=+157.717280230 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:56 crc kubenswrapper[4869]: I0130 21:45:56.331619 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:56 crc kubenswrapper[4869]: E0130 21:45:56.332058 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:45:56.832050952 +0000 UTC m=+157.717808977 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:56 crc kubenswrapper[4869]: I0130 21:45:56.432400 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:45:56 crc kubenswrapper[4869]: E0130 21:45:56.432591 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:45:56.932556209 +0000 UTC m=+157.818314254 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:56 crc kubenswrapper[4869]: I0130 21:45:56.432754 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:56 crc kubenswrapper[4869]: E0130 21:45:56.433153 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:45:56.933142037 +0000 UTC m=+157.818900062 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:56 crc kubenswrapper[4869]: I0130 21:45:56.449605 4869 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-fgmnt container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.32:6443/healthz\": dial tcp 10.217.0.32:6443: connect: connection refused" start-of-body= Jan 30 21:45:56 crc kubenswrapper[4869]: I0130 21:45:56.449653 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt" podUID="e7c790ef-ae52-4809-b6f2-088811793867" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.32:6443/healthz\": dial tcp 10.217.0.32:6443: connect: connection refused" Jan 30 21:45:56 crc kubenswrapper[4869]: I0130 21:45:56.449731 4869 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-px2s8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 30 21:45:56 crc kubenswrapper[4869]: I0130 21:45:56.449784 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-px2s8" podUID="8b2bd278-ed0e-4e4e-90a6-9286bc29a664" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 30 21:45:56 crc kubenswrapper[4869]: I0130 21:45:56.450349 4869 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-b86ql container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 30 21:45:56 crc kubenswrapper[4869]: I0130 21:45:56.450394 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-b86ql" podUID="fca2992a-2cb5-4b86-9cfe-66d8dae76acb" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 30 21:45:56 crc kubenswrapper[4869]: I0130 21:45:56.450352 4869 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-q4k7p container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Jan 30 21:45:56 crc kubenswrapper[4869]: I0130 21:45:56.450442 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q4k7p" podUID="98f852c4-a74a-4153-a095-136d3ef7d5c2" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Jan 30 21:45:56 crc kubenswrapper[4869]: I0130 21:45:56.453025 4869 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gs5md" event={"ID":"f2aeffca-8872-4e12-8651-4fa2fe16be8e","Type":"ContainerStarted","Data":"b0268c30c873a5f2d464a015bcd1336ba47bb0ec54f1d76c0c0a55e9746a02ff"} Jan 30 21:45:56 crc kubenswrapper[4869]: I0130 21:45:56.458884 4869 patch_prober.go:28] interesting pod/router-default-5444994796-2zzvw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 21:45:56 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Jan 30 21:45:56 crc kubenswrapper[4869]: [+]process-running ok Jan 30 21:45:56 crc kubenswrapper[4869]: healthz check failed Jan 30 21:45:56 crc kubenswrapper[4869]: I0130 21:45:56.458965 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2zzvw" podUID="3764a402-d82c-498f-91ab-91f657d352c6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 21:45:56 crc kubenswrapper[4869]: I0130 21:45:56.479860 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9f2tz" Jan 30 21:45:56 crc kubenswrapper[4869]: I0130 21:45:56.488082 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-nrn8v" podStartSLOduration=7.488065048 podStartE2EDuration="7.488065048s" podCreationTimestamp="2026-01-30 21:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:56.487394898 +0000 UTC m=+157.373152923" watchObservedRunningTime="2026-01-30 21:45:56.488065048 +0000 UTC m=+157.373823073" Jan 30 21:45:56 crc kubenswrapper[4869]: I0130 21:45:56.489083 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4t6ks" podStartSLOduration=136.48907588 podStartE2EDuration="2m16.48907588s" podCreationTimestamp="2026-01-30 21:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:55.967857495 +0000 UTC m=+156.853615520" watchObservedRunningTime="2026-01-30 21:45:56.48907588 +0000 UTC m=+157.374833905" Jan 30 21:45:56 crc kubenswrapper[4869]: I0130 21:45:56.534062 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:45:56 crc kubenswrapper[4869]: E0130 21:45:56.534149 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:45:57.03412744 +0000 UTC m=+157.919885465 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:56 crc kubenswrapper[4869]: I0130 21:45:56.535908 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:56 crc kubenswrapper[4869]: E0130 21:45:56.571104 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:45:57.071083055 +0000 UTC m=+157.956841080 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:56 crc kubenswrapper[4869]: I0130 21:45:56.639501 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:45:56 crc kubenswrapper[4869]: E0130 21:45:56.640020 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:45:57.139995776 +0000 UTC m=+158.025753801 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:56 crc kubenswrapper[4869]: I0130 21:45:56.677435 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w587f" Jan 30 21:45:56 crc kubenswrapper[4869]: I0130 21:45:56.678140 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w587f" Jan 30 21:45:56 crc kubenswrapper[4869]: I0130 21:45:56.680788 4869 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-w587f container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.23:8443/livez\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Jan 30 21:45:56 crc kubenswrapper[4869]: I0130 21:45:56.680852 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w587f" podUID="6d91a880-3f38-492b-b797-6fc24c2da65e" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.23:8443/livez\": dial tcp 10.217.0.23:8443: connect: connection refused" Jan 30 21:45:56 crc kubenswrapper[4869]: I0130 21:45:56.741322 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:56 crc kubenswrapper[4869]: E0130 21:45:56.742127 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:45:57.242108664 +0000 UTC m=+158.127866689 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:56 crc kubenswrapper[4869]: I0130 21:45:56.745637 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4q8n" Jan 30 21:45:56 crc kubenswrapper[4869]: I0130 21:45:56.844302 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:45:56 crc kubenswrapper[4869]: E0130 21:45:56.844884 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:45:57.344858532 +0000 UTC m=+158.230616557 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:56 crc kubenswrapper[4869]: I0130 21:45:56.845473 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:56 crc kubenswrapper[4869]: E0130 21:45:56.845931 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:45:57.345919706 +0000 UTC m=+158.231677741 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:56 crc kubenswrapper[4869]: I0130 21:45:56.946919 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:45:56 crc kubenswrapper[4869]: E0130 21:45:56.947283 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:45:57.447257158 +0000 UTC m=+158.333015183 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:56 crc kubenswrapper[4869]: I0130 21:45:56.947667 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:56 crc kubenswrapper[4869]: E0130 21:45:56.948001 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:45:57.447991322 +0000 UTC m=+158.333749347 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:57 crc kubenswrapper[4869]: I0130 21:45:57.049041 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:45:57 crc kubenswrapper[4869]: E0130 21:45:57.049422 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:45:57.549399787 +0000 UTC m=+158.435157812 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:57 crc kubenswrapper[4869]: I0130 21:45:57.049694 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:57 crc kubenswrapper[4869]: E0130 21:45:57.050006 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:45:57.549999296 +0000 UTC m=+158.435757321 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:57 crc kubenswrapper[4869]: I0130 21:45:57.151707 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:45:57 crc kubenswrapper[4869]: E0130 21:45:57.152156 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:45:57.652119874 +0000 UTC m=+158.537877899 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:57 crc kubenswrapper[4869]: I0130 21:45:57.252976 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:57 crc kubenswrapper[4869]: E0130 21:45:57.253441 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:45:57.753424237 +0000 UTC m=+158.639182262 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:57 crc kubenswrapper[4869]: I0130 21:45:57.329957 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cjzmv" Jan 30 21:45:57 crc kubenswrapper[4869]: E0130 21:45:57.354720 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-30 21:45:57.854695548 +0000 UTC m=+158.740453583 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:57 crc kubenswrapper[4869]: I0130 21:45:57.354604 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:45:57 crc kubenswrapper[4869]: E0130 21:45:57.358080 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:45:57.858071395 +0000 UTC m=+158.743829420 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:57 crc kubenswrapper[4869]: I0130 21:45:57.358170 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:57 crc kubenswrapper[4869]: I0130 21:45:57.458973 4869 patch_prober.go:28] interesting pod/router-default-5444994796-2zzvw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 21:45:57 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Jan 30 21:45:57 crc kubenswrapper[4869]: [+]process-running ok Jan 30 21:45:57 crc kubenswrapper[4869]: healthz check failed Jan 30 21:45:57 crc kubenswrapper[4869]: I0130 21:45:57.459316 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:45:57 crc kubenswrapper[4869]: E0130 21:45:57.459396 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:45:57.959373167 +0000 UTC m=+158.845131202 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:57 crc kubenswrapper[4869]: I0130 21:45:57.459793 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2zzvw" podUID="3764a402-d82c-498f-91ab-91f657d352c6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 21:45:57 crc kubenswrapper[4869]: I0130 21:45:57.460135 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:57 crc kubenswrapper[4869]: E0130 21:45:57.460591 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:45:57.960572664 +0000 UTC m=+158.846330689 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:57 crc kubenswrapper[4869]: I0130 21:45:57.561237 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:45:57 crc kubenswrapper[4869]: E0130 21:45:57.562386 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:45:58.062359982 +0000 UTC m=+158.948118007 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:57 crc kubenswrapper[4869]: I0130 21:45:57.562645 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:57 crc kubenswrapper[4869]: E0130 21:45:57.563054 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:45:58.063034074 +0000 UTC m=+158.948792099 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:57 crc kubenswrapper[4869]: I0130 21:45:57.666910 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:45:57 crc kubenswrapper[4869]: E0130 21:45:57.667352 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:45:58.16733326 +0000 UTC m=+159.053091285 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:57 crc kubenswrapper[4869]: I0130 21:45:57.768985 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:57 crc kubenswrapper[4869]: E0130 21:45:57.769318 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:45:58.269304953 +0000 UTC m=+159.155062978 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:57 crc kubenswrapper[4869]: I0130 21:45:57.851646 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt" Jan 30 21:45:57 crc kubenswrapper[4869]: I0130 21:45:57.869833 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:45:57 crc kubenswrapper[4869]: E0130 21:45:57.870235 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:45:58.370213903 +0000 UTC m=+159.255971918 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:57 crc kubenswrapper[4869]: I0130 21:45:57.970913 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:57 crc kubenswrapper[4869]: E0130 21:45:57.971356 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:45:58.47134106 +0000 UTC m=+159.357099095 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:58 crc kubenswrapper[4869]: I0130 21:45:58.072582 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:45:58 crc kubenswrapper[4869]: E0130 21:45:58.072881 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:45:58.572864379 +0000 UTC m=+159.458622394 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:58 crc kubenswrapper[4869]: I0130 21:45:58.174572 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:58 crc kubenswrapper[4869]: E0130 21:45:58.174986 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:45:58.674968597 +0000 UTC m=+159.560726622 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:58 crc kubenswrapper[4869]: I0130 21:45:58.276313 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:45:58 crc kubenswrapper[4869]: E0130 21:45:58.276476 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:45:58.776449645 +0000 UTC m=+159.662207670 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:58 crc kubenswrapper[4869]: I0130 21:45:58.276525 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:58 crc kubenswrapper[4869]: E0130 21:45:58.276813 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:45:58.776800736 +0000 UTC m=+159.662558761 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:58 crc kubenswrapper[4869]: I0130 21:45:58.377316 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:45:58 crc kubenswrapper[4869]: E0130 21:45:58.377722 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:45:58.877699106 +0000 UTC m=+159.763457131 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:58 crc kubenswrapper[4869]: I0130 21:45:58.465021 4869 patch_prober.go:28] interesting pod/router-default-5444994796-2zzvw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 21:45:58 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Jan 30 21:45:58 crc kubenswrapper[4869]: [+]process-running ok Jan 30 21:45:58 crc kubenswrapper[4869]: healthz check failed Jan 30 21:45:58 crc kubenswrapper[4869]: I0130 21:45:58.465099 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2zzvw" podUID="3764a402-d82c-498f-91ab-91f657d352c6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 21:45:58 crc kubenswrapper[4869]: I0130 21:45:58.468167 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-nrn8v" Jan 30 21:45:58 crc kubenswrapper[4869]: I0130 21:45:58.479279 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:58 crc kubenswrapper[4869]: E0130 21:45:58.481671 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:45:58.981659761 +0000 UTC m=+159.867417786 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:58 crc kubenswrapper[4869]: I0130 21:45:58.580377 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:45:58 crc kubenswrapper[4869]: E0130 21:45:58.580645 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:45:59.08062479 +0000 UTC m=+159.966382815 (durationBeforeRetry 500ms). 
Jan 30 21:45:58 crc kubenswrapper[4869]: I0130 21:45:58.580980 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4"
Jan 30 21:45:58 crc kubenswrapper[4869]: E0130 21:45:58.581284 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:45:59.081276731 +0000 UTC m=+159.967034756 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:45:58 crc kubenswrapper[4869]: I0130 21:45:58.681662 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:45:58 crc kubenswrapper[4869]: E0130 21:45:58.682092 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:45:59.182072067 +0000 UTC m=+160.067830092 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
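
Each nestedpendingoperations.go:348 entry above shows the volume manager's retry gate: a failed mount or unmount is parked with "No retries permitted until" a deadline 500ms out, so the reconciler keeps looping without hammering the missing driver. A compressed sketch of that gate pattern, not the kubelet's actual implementation (the real code can also grow the delay for persistently failing operations; this excerpt happens to show a constant 500ms):

package main

import (
	"errors"
	"fmt"
	"time"
)

// pendingOp gates retries of a named operation: after a failure, no new
// attempt is permitted until notBefore, mirroring the log lines above.
type pendingOp struct {
	notBefore time.Time
	delay     time.Duration
}

func (p *pendingOp) try(run func() error) error {
	if time.Now().Before(p.notBefore) {
		return fmt.Errorf("no retries permitted until %s", p.notBefore.Format(time.RFC3339Nano))
	}
	if err := run(); err != nil {
		p.notBefore = time.Now().Add(p.delay) // gate the next attempt
		return err
	}
	return nil
}

func main() {
	op := &pendingOp{delay: 500 * time.Millisecond}
	mount := func() error {
		return errors.New("driver not found in the list of registered CSI drivers")
	}
	for i := 0; i < 3; i++ {
		if err := op.try(mount); err != nil {
			fmt.Println("attempt", i, "->", err)
		}
		time.Sleep(300 * time.Millisecond) // some attempts land inside the backoff window
	}
}
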
Jan 30 21:45:58 crc kubenswrapper[4869]: I0130 21:45:58.783331 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4"
Jan 30 21:45:58 crc kubenswrapper[4869]: E0130 21:45:58.783938 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:45:59.283919726 +0000 UTC m=+160.169677751 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:45:58 crc kubenswrapper[4869]: I0130 21:45:58.838077 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-b2g6w"]
Jan 30 21:45:58 crc kubenswrapper[4869]: I0130 21:45:58.839834 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b2g6w" Jan 30 21:45:58 crc kubenswrapper[4869]: I0130 21:45:58.845971 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 30 21:45:58 crc kubenswrapper[4869]: I0130 21:45:58.856304 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b2g6w"] Jan 30 21:45:58 crc kubenswrapper[4869]: I0130 21:45:58.889306 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:45:58 crc kubenswrapper[4869]: I0130 21:45:58.889942 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a79f57ed-ffe5-4f65-acd5-0bcd42e47a02-catalog-content\") pod \"community-operators-b2g6w\" (UID: \"a79f57ed-ffe5-4f65-acd5-0bcd42e47a02\") " pod="openshift-marketplace/community-operators-b2g6w" Jan 30 21:45:58 crc kubenswrapper[4869]: I0130 21:45:58.890090 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a79f57ed-ffe5-4f65-acd5-0bcd42e47a02-utilities\") pod \"community-operators-b2g6w\" (UID: \"a79f57ed-ffe5-4f65-acd5-0bcd42e47a02\") " pod="openshift-marketplace/community-operators-b2g6w" Jan 30 21:45:58 crc kubenswrapper[4869]: I0130 21:45:58.890221 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m2t8\" (UniqueName: \"kubernetes.io/projected/a79f57ed-ffe5-4f65-acd5-0bcd42e47a02-kube-api-access-7m2t8\") pod \"community-operators-b2g6w\" (UID: \"a79f57ed-ffe5-4f65-acd5-0bcd42e47a02\") " pod="openshift-marketplace/community-operators-b2g6w" Jan 30 21:45:58 crc kubenswrapper[4869]: E0130 21:45:58.899491 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:45:59.399445107 +0000 UTC m=+160.285203132 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:58 crc kubenswrapper[4869]: I0130 21:45:58.991575 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7m2t8\" (UniqueName: \"kubernetes.io/projected/a79f57ed-ffe5-4f65-acd5-0bcd42e47a02-kube-api-access-7m2t8\") pod \"community-operators-b2g6w\" (UID: \"a79f57ed-ffe5-4f65-acd5-0bcd42e47a02\") " pod="openshift-marketplace/community-operators-b2g6w" Jan 30 21:45:58 crc kubenswrapper[4869]: I0130 21:45:58.991642 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a79f57ed-ffe5-4f65-acd5-0bcd42e47a02-catalog-content\") pod \"community-operators-b2g6w\" (UID: \"a79f57ed-ffe5-4f65-acd5-0bcd42e47a02\") " pod="openshift-marketplace/community-operators-b2g6w" Jan 30 21:45:58 crc kubenswrapper[4869]: I0130 21:45:58.991722 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a79f57ed-ffe5-4f65-acd5-0bcd42e47a02-utilities\") pod \"community-operators-b2g6w\" (UID: \"a79f57ed-ffe5-4f65-acd5-0bcd42e47a02\") " pod="openshift-marketplace/community-operators-b2g6w" Jan 30 21:45:58 crc kubenswrapper[4869]: I0130 21:45:58.991771 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:58 crc kubenswrapper[4869]: E0130 21:45:58.992184 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:45:59.492165929 +0000 UTC m=+160.377923954 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:58 crc kubenswrapper[4869]: I0130 21:45:58.992810 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a79f57ed-ffe5-4f65-acd5-0bcd42e47a02-utilities\") pod \"community-operators-b2g6w\" (UID: \"a79f57ed-ffe5-4f65-acd5-0bcd42e47a02\") " pod="openshift-marketplace/community-operators-b2g6w" Jan 30 21:45:58 crc kubenswrapper[4869]: I0130 21:45:58.992953 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a79f57ed-ffe5-4f65-acd5-0bcd42e47a02-catalog-content\") pod \"community-operators-b2g6w\" (UID: \"a79f57ed-ffe5-4f65-acd5-0bcd42e47a02\") " pod="openshift-marketplace/community-operators-b2g6w" Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.012188 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6v7j4"] Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.014277 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6v7j4" Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.017600 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.022756 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7m2t8\" (UniqueName: \"kubernetes.io/projected/a79f57ed-ffe5-4f65-acd5-0bcd42e47a02-kube-api-access-7m2t8\") pod \"community-operators-b2g6w\" (UID: \"a79f57ed-ffe5-4f65-acd5-0bcd42e47a02\") " pod="openshift-marketplace/community-operators-b2g6w" Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.025110 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6v7j4"] Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.093135 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.093372 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ebfcb0e-58a5-4ab1-894f-1a6093921531-utilities\") pod \"certified-operators-6v7j4\" (UID: \"4ebfcb0e-58a5-4ab1-894f-1a6093921531\") " pod="openshift-marketplace/certified-operators-6v7j4" Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.093399 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2q9f\" (UniqueName: \"kubernetes.io/projected/4ebfcb0e-58a5-4ab1-894f-1a6093921531-kube-api-access-c2q9f\") pod \"certified-operators-6v7j4\" (UID: \"4ebfcb0e-58a5-4ab1-894f-1a6093921531\") " 
pod="openshift-marketplace/certified-operators-6v7j4" Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.093459 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ebfcb0e-58a5-4ab1-894f-1a6093921531-catalog-content\") pod \"certified-operators-6v7j4\" (UID: \"4ebfcb0e-58a5-4ab1-894f-1a6093921531\") " pod="openshift-marketplace/certified-operators-6v7j4" Jan 30 21:45:59 crc kubenswrapper[4869]: E0130 21:45:59.093540 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:45:59.593502222 +0000 UTC m=+160.479260257 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.158086 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b2g6w" Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.195521 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ebfcb0e-58a5-4ab1-894f-1a6093921531-utilities\") pod \"certified-operators-6v7j4\" (UID: \"4ebfcb0e-58a5-4ab1-894f-1a6093921531\") " pod="openshift-marketplace/certified-operators-6v7j4" Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.195560 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2q9f\" (UniqueName: \"kubernetes.io/projected/4ebfcb0e-58a5-4ab1-894f-1a6093921531-kube-api-access-c2q9f\") pod \"certified-operators-6v7j4\" (UID: \"4ebfcb0e-58a5-4ab1-894f-1a6093921531\") " pod="openshift-marketplace/certified-operators-6v7j4" Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.195589 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.195629 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ebfcb0e-58a5-4ab1-894f-1a6093921531-catalog-content\") pod \"certified-operators-6v7j4\" (UID: \"4ebfcb0e-58a5-4ab1-894f-1a6093921531\") " pod="openshift-marketplace/certified-operators-6v7j4" Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.196119 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ebfcb0e-58a5-4ab1-894f-1a6093921531-utilities\") pod \"certified-operators-6v7j4\" (UID: \"4ebfcb0e-58a5-4ab1-894f-1a6093921531\") " pod="openshift-marketplace/certified-operators-6v7j4" Jan 30 21:45:59 crc 
kubenswrapper[4869]: I0130 21:45:59.196128 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ebfcb0e-58a5-4ab1-894f-1a6093921531-catalog-content\") pod \"certified-operators-6v7j4\" (UID: \"4ebfcb0e-58a5-4ab1-894f-1a6093921531\") " pod="openshift-marketplace/certified-operators-6v7j4" Jan 30 21:45:59 crc kubenswrapper[4869]: E0130 21:45:59.196174 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:45:59.696153787 +0000 UTC m=+160.581911812 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.224677 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2q9f\" (UniqueName: \"kubernetes.io/projected/4ebfcb0e-58a5-4ab1-894f-1a6093921531-kube-api-access-c2q9f\") pod \"certified-operators-6v7j4\" (UID: \"4ebfcb0e-58a5-4ab1-894f-1a6093921531\") " pod="openshift-marketplace/certified-operators-6v7j4" Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.226947 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nlmmn"] Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.228043 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nlmmn" Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.242639 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nlmmn"] Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.296515 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.296775 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfs9c\" (UniqueName: \"kubernetes.io/projected/98ffc112-5fb6-4001-b071-2df7e3d90fd2-kube-api-access-nfs9c\") pod \"community-operators-nlmmn\" (UID: \"98ffc112-5fb6-4001-b071-2df7e3d90fd2\") " pod="openshift-marketplace/community-operators-nlmmn" Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.296810 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98ffc112-5fb6-4001-b071-2df7e3d90fd2-catalog-content\") pod \"community-operators-nlmmn\" (UID: \"98ffc112-5fb6-4001-b071-2df7e3d90fd2\") " pod="openshift-marketplace/community-operators-nlmmn" Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.296847 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98ffc112-5fb6-4001-b071-2df7e3d90fd2-utilities\") pod \"community-operators-nlmmn\" (UID: \"98ffc112-5fb6-4001-b071-2df7e3d90fd2\") " pod="openshift-marketplace/community-operators-nlmmn" Jan 30 21:45:59 crc kubenswrapper[4869]: E0130 21:45:59.297052 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:45:59.797028546 +0000 UTC m=+160.682786571 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.347622 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6v7j4" Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.397823 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfs9c\" (UniqueName: \"kubernetes.io/projected/98ffc112-5fb6-4001-b071-2df7e3d90fd2-kube-api-access-nfs9c\") pod \"community-operators-nlmmn\" (UID: \"98ffc112-5fb6-4001-b071-2df7e3d90fd2\") " pod="openshift-marketplace/community-operators-nlmmn" Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.398348 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98ffc112-5fb6-4001-b071-2df7e3d90fd2-catalog-content\") pod \"community-operators-nlmmn\" (UID: \"98ffc112-5fb6-4001-b071-2df7e3d90fd2\") " pod="openshift-marketplace/community-operators-nlmmn" Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.398392 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98ffc112-5fb6-4001-b071-2df7e3d90fd2-utilities\") pod \"community-operators-nlmmn\" (UID: \"98ffc112-5fb6-4001-b071-2df7e3d90fd2\") " pod="openshift-marketplace/community-operators-nlmmn" Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.398435 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:59 crc kubenswrapper[4869]: E0130 21:45:59.398771 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:45:59.898756792 +0000 UTC m=+160.784514817 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.399330 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98ffc112-5fb6-4001-b071-2df7e3d90fd2-catalog-content\") pod \"community-operators-nlmmn\" (UID: \"98ffc112-5fb6-4001-b071-2df7e3d90fd2\") " pod="openshift-marketplace/community-operators-nlmmn"
Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.400253 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98ffc112-5fb6-4001-b071-2df7e3d90fd2-utilities\") pod \"community-operators-nlmmn\" (UID: \"98ffc112-5fb6-4001-b071-2df7e3d90fd2\") " pod="openshift-marketplace/community-operators-nlmmn"
Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.419404 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jkgvk"]
Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.420735 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jkgvk"
Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.430981 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfs9c\" (UniqueName: \"kubernetes.io/projected/98ffc112-5fb6-4001-b071-2df7e3d90fd2-kube-api-access-nfs9c\") pod \"community-operators-nlmmn\" (UID: \"98ffc112-5fb6-4001-b071-2df7e3d90fd2\") " pod="openshift-marketplace/community-operators-nlmmn"
Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.450314 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jkgvk"]
Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.477832 4869 patch_prober.go:28] interesting pod/router-default-5444994796-2zzvw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 21:45:59 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld
Jan 30 21:45:59 crc kubenswrapper[4869]: [+]process-running ok
Jan 30 21:45:59 crc kubenswrapper[4869]: healthz check failed
Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.477888 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2zzvw" podUID="3764a402-d82c-498f-91ab-91f657d352c6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.500563 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.500785 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lldpt\" (UniqueName: \"kubernetes.io/projected/2f93f920-68d0-41d1-8a20-ca174eda2fcd-kube-api-access-lldpt\") pod \"certified-operators-jkgvk\" (UID: \"2f93f920-68d0-41d1-8a20-ca174eda2fcd\") " pod="openshift-marketplace/certified-operators-jkgvk"
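
The router startup probe entries above record exactly what the kubelet prober does: an HTTP GET whose non-2xx/3xx status fails the probe, with the first bytes of the response body ([-]backend-http failed, [-]has-synced failed, [+]process-running ok) echoed into the log as "start-of-body". A self-contained sketch of that check; the URL and port below are illustrative assumptions, not taken from this log:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// probe mimics the kubelet HTTP prober: GET the endpoint, keep the first
// bytes of the body for the log line, succeed only on a 2xx/3xx status.
func probe(url string) error {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	start, _ := io.ReadAll(io.LimitReader(resp.Body, 256)) // the "start-of-body" in the log
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("HTTP probe failed with statuscode: %d, start-of-body=%s", resp.StatusCode, start)
	}
	return nil
}

func main() {
	// Illustrative endpoint only; the router's real probe target is set in
	// the pod spec and is not shown in this excerpt.
	if err := probe("http://127.0.0.1:1936/healthz/ready"); err != nil {
		fmt.Println("Probe failed:", err)
	}
}
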
\"kube-api-access-lldpt\" (UniqueName: \"kubernetes.io/projected/2f93f920-68d0-41d1-8a20-ca174eda2fcd-kube-api-access-lldpt\") pod \"certified-operators-jkgvk\" (UID: \"2f93f920-68d0-41d1-8a20-ca174eda2fcd\") " pod="openshift-marketplace/certified-operators-jkgvk" Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.500822 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f93f920-68d0-41d1-8a20-ca174eda2fcd-utilities\") pod \"certified-operators-jkgvk\" (UID: \"2f93f920-68d0-41d1-8a20-ca174eda2fcd\") " pod="openshift-marketplace/certified-operators-jkgvk" Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.500936 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f93f920-68d0-41d1-8a20-ca174eda2fcd-catalog-content\") pod \"certified-operators-jkgvk\" (UID: \"2f93f920-68d0-41d1-8a20-ca174eda2fcd\") " pod="openshift-marketplace/certified-operators-jkgvk" Jan 30 21:45:59 crc kubenswrapper[4869]: E0130 21:45:59.501059 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:46:00.001037424 +0000 UTC m=+160.886795449 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.510568 4869 generic.go:334] "Generic (PLEG): container finished" podID="43f6cf85-1016-4b61-b95a-9ad7c66bb29f" containerID="2f62f264d8262cf23977910469a2aa2e9344f23d979501289318b766fb14f547" exitCode=0 Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.510624 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-lf7bw" event={"ID":"43f6cf85-1016-4b61-b95a-9ad7c66bb29f","Type":"ContainerDied","Data":"2f62f264d8262cf23977910469a2aa2e9344f23d979501289318b766fb14f547"} Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.560083 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nlmmn" Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.602786 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lldpt\" (UniqueName: \"kubernetes.io/projected/2f93f920-68d0-41d1-8a20-ca174eda2fcd-kube-api-access-lldpt\") pod \"certified-operators-jkgvk\" (UID: \"2f93f920-68d0-41d1-8a20-ca174eda2fcd\") " pod="openshift-marketplace/certified-operators-jkgvk" Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.602831 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f93f920-68d0-41d1-8a20-ca174eda2fcd-utilities\") pod \"certified-operators-jkgvk\" (UID: \"2f93f920-68d0-41d1-8a20-ca174eda2fcd\") " pod="openshift-marketplace/certified-operators-jkgvk" Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.602914 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.602989 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f93f920-68d0-41d1-8a20-ca174eda2fcd-catalog-content\") pod \"certified-operators-jkgvk\" (UID: \"2f93f920-68d0-41d1-8a20-ca174eda2fcd\") " pod="openshift-marketplace/certified-operators-jkgvk" Jan 30 21:45:59 crc kubenswrapper[4869]: E0130 21:45:59.609057 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:46:00.109040108 +0000 UTC m=+160.994798133 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.617356 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f93f920-68d0-41d1-8a20-ca174eda2fcd-catalog-content\") pod \"certified-operators-jkgvk\" (UID: \"2f93f920-68d0-41d1-8a20-ca174eda2fcd\") " pod="openshift-marketplace/certified-operators-jkgvk"
Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.619209 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f93f920-68d0-41d1-8a20-ca174eda2fcd-utilities\") pod \"certified-operators-jkgvk\" (UID: \"2f93f920-68d0-41d1-8a20-ca174eda2fcd\") " pod="openshift-marketplace/certified-operators-jkgvk"
Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.664690 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lldpt\" (UniqueName: \"kubernetes.io/projected/2f93f920-68d0-41d1-8a20-ca174eda2fcd-kube-api-access-lldpt\") pod \"certified-operators-jkgvk\" (UID: \"2f93f920-68d0-41d1-8a20-ca174eda2fcd\") " pod="openshift-marketplace/certified-operators-jkgvk"
Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.703576 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:45:59 crc kubenswrapper[4869]: E0130 21:45:59.704215 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:46:00.204194577 +0000 UTC m=+161.089952602 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.754182 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jkgvk"
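
When this error persists, the first thing worth confirming from outside the node is whether the driver ever completed node registration; the kubelet records successful registrations in the node's CSINode object. A hedged sketch using k8s.io/client-go (assumes the module is available in go.mod and a kubeconfig at the default path; the node name "crc" matches this log):

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// The kubelet adds an entry under spec.drivers after a successful
	// plugin registration handshake.
	node, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, d := range node.Spec.Drivers {
		// kubevirt.io.hostpath-provisioner should appear here once the
		// driver's node plugin has registered.
		fmt.Println("registered:", d.Name)
	}
}
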
Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.806431 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4"
Jan 30 21:45:59 crc kubenswrapper[4869]: E0130 21:45:59.806783 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:46:00.306764909 +0000 UTC m=+161.192522934 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.912561 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:45:59 crc kubenswrapper[4869]: E0130 21:45:59.912993 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:46:00.412975606 +0000 UTC m=+161.298733631 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:45:59 crc kubenswrapper[4869]: I0130 21:45:59.968385 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b2g6w"]
Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.037688 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4"
Jan 30 21:46:00 crc kubenswrapper[4869]: E0130 21:46:00.038137 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 21:46:00.538118579 +0000 UTC m=+161.423876604 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.065353 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6v7j4"] Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.138334 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:46:00 crc kubenswrapper[4869]: E0130 21:46:00.139245 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:46:00.639227615 +0000 UTC m=+161.524985640 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.192233 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nlmmn"] Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.239717 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:46:00 crc kubenswrapper[4869]: E0130 21:46:00.240497 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:46:00.740032962 +0000 UTC m=+161.625790977 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.303363 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jkgvk"] Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.307245 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.316670 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.316793 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.327933 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.328225 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.341372 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:46:00 crc kubenswrapper[4869]: E0130 21:46:00.342253 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:46:00.842219653 +0000 UTC m=+161.727977738 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.442850 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.442971 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a518d718-eb86-48bf-83ba-b6465b09ff50-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a518d718-eb86-48bf-83ba-b6465b09ff50\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.442997 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a518d718-eb86-48bf-83ba-b6465b09ff50-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a518d718-eb86-48bf-83ba-b6465b09ff50\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 21:46:00 crc kubenswrapper[4869]: E0130 21:46:00.443316 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:46:00.943301988 +0000 UTC m=+161.829060013 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.460881 4869 patch_prober.go:28] interesting pod/router-default-5444994796-2zzvw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 21:46:00 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld
Jan 30 21:46:00 crc kubenswrapper[4869]: [+]process-running ok
Jan 30 21:46:00 crc kubenswrapper[4869]: healthz check failed
Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.460957 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2zzvw" podUID="3764a402-d82c-498f-91ab-91f657d352c6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.517230 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jkgvk" event={"ID":"2f93f920-68d0-41d1-8a20-ca174eda2fcd","Type":"ContainerStarted","Data":"8d0cc067cb5f2446063d684c8da24bf903124cb4c6084336a0aa5230411234a7"}
Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.524492 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6v7j4" event={"ID":"4ebfcb0e-58a5-4ab1-894f-1a6093921531","Type":"ContainerStarted","Data":"4d4e4c8b7475355f0d485adf4c9eea933864e2df61cdd8774f504cc426c3bb3a"}
Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.526371 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gs5md" event={"ID":"f2aeffca-8872-4e12-8651-4fa2fe16be8e","Type":"ContainerStarted","Data":"5f338c00d648ddd4f2c00f658c6dcdb54e6508b4467575d5aa684e0f36102275"}
Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.527052 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nlmmn" event={"ID":"98ffc112-5fb6-4001-b071-2df7e3d90fd2","Type":"ContainerStarted","Data":"6de9e15db559cbbe174523a5489db956fa0e54b42e208953c6fe31da878ea49a"}
Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.532914 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2g6w" event={"ID":"a79f57ed-ffe5-4f65-acd5-0bcd42e47a02","Type":"ContainerStarted","Data":"3acc01a17eef1a4e05360ce28fd5d642e8521042e0052ea8fa9a9a5b5c342794"}
Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.532956 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2g6w" event={"ID":"a79f57ed-ffe5-4f65-acd5-0bcd42e47a02","Type":"ContainerStarted","Data":"971ded5e14869f2b606b95e3eca85789d95c5aad0ed0ad345f21e4536da82905"}
Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.544463 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
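
The ContainerStarted event for hostpath-provisioner/csi-hostpathplugin-gs5md above is the turning point: once that plugin's node service registers with the kubelet, the parked mount and unmount operations stop failing on their next 500ms retry. To see the cadence of the loop in a saved excerpt like this one, a small scan over the journal text is enough (the input filename is an assumption):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// Tally the two repeating CSI failures in a saved journal excerpt to see
// how many retry cycles elapsed before the driver registered.
func main() {
	f, err := os.Open("kubelet.log") // assumed filename for the saved excerpt
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()
	counts := map[string]int{}
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		line := sc.Text()
		switch {
		case strings.Contains(line, "Unmounter.TearDownAt failed to get CSI client"):
			counts["unmount"]++
		case strings.Contains(line, "attacher.MountDevice failed to create newCsiDriverClient"):
			counts["mount"]++
		}
	}
	fmt.Printf("unmount retries: %d, mount retries: %d\n", counts["unmount"], counts["mount"])
}
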
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.544600 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a518d718-eb86-48bf-83ba-b6465b09ff50-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a518d718-eb86-48bf-83ba-b6465b09ff50\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.544624 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a518d718-eb86-48bf-83ba-b6465b09ff50-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a518d718-eb86-48bf-83ba-b6465b09ff50\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.544679 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a518d718-eb86-48bf-83ba-b6465b09ff50-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a518d718-eb86-48bf-83ba-b6465b09ff50\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 21:46:00 crc kubenswrapper[4869]: E0130 21:46:00.544711 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:46:01.044679512 +0000 UTC m=+161.930437537 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.544963 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:46:00 crc kubenswrapper[4869]: E0130 21:46:00.545357 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:46:01.045350103 +0000 UTC m=+161.931108128 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.571785 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a518d718-eb86-48bf-83ba-b6465b09ff50-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a518d718-eb86-48bf-83ba-b6465b09ff50\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.645597 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:46:00 crc kubenswrapper[4869]: E0130 21:46:00.645727 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:46:01.145699815 +0000 UTC m=+162.031457850 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.645850 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:46:00 crc kubenswrapper[4869]: E0130 21:46:00.646156 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:46:01.14614841 +0000 UTC m=+162.031906435 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.657715 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.747585 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:46:00 crc kubenswrapper[4869]: E0130 21:46:00.748178 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:46:01.248153394 +0000 UTC m=+162.133911429 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.807669 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rr69r"] Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.809079 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rr69r" Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.811175 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.820171 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rr69r"] Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.842114 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-lf7bw" Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.849435 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:46:00 crc kubenswrapper[4869]: E0130 21:46:00.851285 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:46:01.351272784 +0000 UTC m=+162.237030809 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.878868 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 30 21:46:00 crc kubenswrapper[4869]: E0130 21:46:00.910821 4869 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ebfcb0e_58a5_4ab1_894f_1a6093921531.slice/crio-conmon-8af38011d1e1fc94050eede5dcd98e753fe3a170e05b3363de5ffd21158228f3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98ffc112_5fb6_4001_b071_2df7e3d90fd2.slice/crio-932b5329c5999ef2d1f4256d38f55de123ae7b1a5cc8d83b06afacffd52808a4.scope\": RecentStats: unable to find data in memory cache]" Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.951044 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43f6cf85-1016-4b61-b95a-9ad7c66bb29f-config-volume\") pod \"43f6cf85-1016-4b61-b95a-9ad7c66bb29f\" (UID: \"43f6cf85-1016-4b61-b95a-9ad7c66bb29f\") " Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.951111 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tlts\" (UniqueName: \"kubernetes.io/projected/43f6cf85-1016-4b61-b95a-9ad7c66bb29f-kube-api-access-6tlts\") pod \"43f6cf85-1016-4b61-b95a-9ad7c66bb29f\" (UID: \"43f6cf85-1016-4b61-b95a-9ad7c66bb29f\") " Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.951227 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/43f6cf85-1016-4b61-b95a-9ad7c66bb29f-secret-volume\") pod \"43f6cf85-1016-4b61-b95a-9ad7c66bb29f\" (UID: \"43f6cf85-1016-4b61-b95a-9ad7c66bb29f\") " Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.952101 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43f6cf85-1016-4b61-b95a-9ad7c66bb29f-config-volume" (OuterVolumeSpecName: "config-volume") pod "43f6cf85-1016-4b61-b95a-9ad7c66bb29f" (UID: "43f6cf85-1016-4b61-b95a-9ad7c66bb29f"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.952330 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.952593 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b394f841-ea61-41c7-9b4b-7ad185073b70-catalog-content\") pod \"redhat-marketplace-rr69r\" (UID: \"b394f841-ea61-41c7-9b4b-7ad185073b70\") " pod="openshift-marketplace/redhat-marketplace-rr69r" Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.952667 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b394f841-ea61-41c7-9b4b-7ad185073b70-utilities\") pod \"redhat-marketplace-rr69r\" (UID: \"b394f841-ea61-41c7-9b4b-7ad185073b70\") " pod="openshift-marketplace/redhat-marketplace-rr69r" Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.952769 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwcmb\" (UniqueName: \"kubernetes.io/projected/b394f841-ea61-41c7-9b4b-7ad185073b70-kube-api-access-lwcmb\") pod \"redhat-marketplace-rr69r\" (UID: \"b394f841-ea61-41c7-9b4b-7ad185073b70\") " pod="openshift-marketplace/redhat-marketplace-rr69r" Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.952976 4869 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43f6cf85-1016-4b61-b95a-9ad7c66bb29f-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:00 crc kubenswrapper[4869]: E0130 21:46:00.953044 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:46:01.453021721 +0000 UTC m=+162.338779776 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.957557 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43f6cf85-1016-4b61-b95a-9ad7c66bb29f-kube-api-access-6tlts" (OuterVolumeSpecName: "kube-api-access-6tlts") pod "43f6cf85-1016-4b61-b95a-9ad7c66bb29f" (UID: "43f6cf85-1016-4b61-b95a-9ad7c66bb29f"). InnerVolumeSpecName "kube-api-access-6tlts". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:46:00 crc kubenswrapper[4869]: I0130 21:46:00.957838 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43f6cf85-1016-4b61-b95a-9ad7c66bb29f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "43f6cf85-1016-4b61-b95a-9ad7c66bb29f" (UID: "43f6cf85-1016-4b61-b95a-9ad7c66bb29f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.054499 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.054587 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b394f841-ea61-41c7-9b4b-7ad185073b70-catalog-content\") pod \"redhat-marketplace-rr69r\" (UID: \"b394f841-ea61-41c7-9b4b-7ad185073b70\") " pod="openshift-marketplace/redhat-marketplace-rr69r" Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.054657 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b394f841-ea61-41c7-9b4b-7ad185073b70-utilities\") pod \"redhat-marketplace-rr69r\" (UID: \"b394f841-ea61-41c7-9b4b-7ad185073b70\") " pod="openshift-marketplace/redhat-marketplace-rr69r" Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.054694 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwcmb\" (UniqueName: \"kubernetes.io/projected/b394f841-ea61-41c7-9b4b-7ad185073b70-kube-api-access-lwcmb\") pod \"redhat-marketplace-rr69r\" (UID: \"b394f841-ea61-41c7-9b4b-7ad185073b70\") " pod="openshift-marketplace/redhat-marketplace-rr69r" Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.054774 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tlts\" (UniqueName: \"kubernetes.io/projected/43f6cf85-1016-4b61-b95a-9ad7c66bb29f-kube-api-access-6tlts\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.054787 4869 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/43f6cf85-1016-4b61-b95a-9ad7c66bb29f-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.055684 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b394f841-ea61-41c7-9b4b-7ad185073b70-utilities\") pod \"redhat-marketplace-rr69r\" (UID: \"b394f841-ea61-41c7-9b4b-7ad185073b70\") " pod="openshift-marketplace/redhat-marketplace-rr69r" Jan 30 21:46:01 crc kubenswrapper[4869]: E0130 21:46:01.056065 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:46:01.556047527 +0000 UTC m=+162.441805552 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.057929 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b394f841-ea61-41c7-9b4b-7ad185073b70-catalog-content\") pod \"redhat-marketplace-rr69r\" (UID: \"b394f841-ea61-41c7-9b4b-7ad185073b70\") " pod="openshift-marketplace/redhat-marketplace-rr69r" Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.070812 4869 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.073511 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwcmb\" (UniqueName: \"kubernetes.io/projected/b394f841-ea61-41c7-9b4b-7ad185073b70-kube-api-access-lwcmb\") pod \"redhat-marketplace-rr69r\" (UID: \"b394f841-ea61-41c7-9b4b-7ad185073b70\") " pod="openshift-marketplace/redhat-marketplace-rr69r" Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.145709 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-4kznw" Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.151081 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-4kznw" Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.156383 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:46:01 crc kubenswrapper[4869]: E0130 21:46:01.156868 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:46:01.656838543 +0000 UTC m=+162.542596568 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.158228 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:46:01 crc kubenswrapper[4869]: E0130 21:46:01.158717 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:46:01.658698401 +0000 UTC m=+162.544456426 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.186230 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-mq7xx" Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.186288 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-mq7xx" Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.190376 4869 patch_prober.go:28] interesting pod/console-f9d7485db-mq7xx container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.190440 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-mq7xx" podUID="c31c9863-59b6-490a-95d1-53d4a9707117" containerName="console" probeResult="failure" output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.205566 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-86lcz"] Jan 30 21:46:01 crc kubenswrapper[4869]: E0130 21:46:01.205793 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43f6cf85-1016-4b61-b95a-9ad7c66bb29f" containerName="collect-profiles" Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.205805 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="43f6cf85-1016-4b61-b95a-9ad7c66bb29f" containerName="collect-profiles" Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.206119 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="43f6cf85-1016-4b61-b95a-9ad7c66bb29f" containerName="collect-profiles" Jan 30 
21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.206842 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-86lcz" Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.240637 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-86lcz"] Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.259521 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.259636 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72e1dcaa-1805-4157-8fd7-0e00177aaf4c-utilities\") pod \"redhat-marketplace-86lcz\" (UID: \"72e1dcaa-1805-4157-8fd7-0e00177aaf4c\") " pod="openshift-marketplace/redhat-marketplace-86lcz" Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.259772 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtn52\" (UniqueName: \"kubernetes.io/projected/72e1dcaa-1805-4157-8fd7-0e00177aaf4c-kube-api-access-gtn52\") pod \"redhat-marketplace-86lcz\" (UID: \"72e1dcaa-1805-4157-8fd7-0e00177aaf4c\") " pod="openshift-marketplace/redhat-marketplace-86lcz" Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.259820 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72e1dcaa-1805-4157-8fd7-0e00177aaf4c-catalog-content\") pod \"redhat-marketplace-86lcz\" (UID: \"72e1dcaa-1805-4157-8fd7-0e00177aaf4c\") " pod="openshift-marketplace/redhat-marketplace-86lcz" Jan 30 21:46:01 crc kubenswrapper[4869]: E0130 21:46:01.259941 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:46:01.759923462 +0000 UTC m=+162.645681487 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.280310 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rr69r" Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.361050 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.361119 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtn52\" (UniqueName: \"kubernetes.io/projected/72e1dcaa-1805-4157-8fd7-0e00177aaf4c-kube-api-access-gtn52\") pod \"redhat-marketplace-86lcz\" (UID: \"72e1dcaa-1805-4157-8fd7-0e00177aaf4c\") " pod="openshift-marketplace/redhat-marketplace-86lcz" Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.361147 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72e1dcaa-1805-4157-8fd7-0e00177aaf4c-catalog-content\") pod \"redhat-marketplace-86lcz\" (UID: \"72e1dcaa-1805-4157-8fd7-0e00177aaf4c\") " pod="openshift-marketplace/redhat-marketplace-86lcz" Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.361172 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72e1dcaa-1805-4157-8fd7-0e00177aaf4c-utilities\") pod \"redhat-marketplace-86lcz\" (UID: \"72e1dcaa-1805-4157-8fd7-0e00177aaf4c\") " pod="openshift-marketplace/redhat-marketplace-86lcz" Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.361541 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72e1dcaa-1805-4157-8fd7-0e00177aaf4c-utilities\") pod \"redhat-marketplace-86lcz\" (UID: \"72e1dcaa-1805-4157-8fd7-0e00177aaf4c\") " pod="openshift-marketplace/redhat-marketplace-86lcz" Jan 30 21:46:01 crc kubenswrapper[4869]: E0130 21:46:01.361766 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:46:01.861755181 +0000 UTC m=+162.747513206 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.362388 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72e1dcaa-1805-4157-8fd7-0e00177aaf4c-catalog-content\") pod \"redhat-marketplace-86lcz\" (UID: \"72e1dcaa-1805-4157-8fd7-0e00177aaf4c\") " pod="openshift-marketplace/redhat-marketplace-86lcz" Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.396467 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtn52\" (UniqueName: \"kubernetes.io/projected/72e1dcaa-1805-4157-8fd7-0e00177aaf4c-kube-api-access-gtn52\") pod \"redhat-marketplace-86lcz\" (UID: \"72e1dcaa-1805-4157-8fd7-0e00177aaf4c\") " pod="openshift-marketplace/redhat-marketplace-86lcz" Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.459289 4869 patch_prober.go:28] interesting pod/router-default-5444994796-2zzvw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 21:46:01 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Jan 30 21:46:01 crc kubenswrapper[4869]: [+]process-running ok Jan 30 21:46:01 crc kubenswrapper[4869]: healthz check failed Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.459339 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2zzvw" podUID="3764a402-d82c-498f-91ab-91f657d352c6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.461715 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:46:01 crc kubenswrapper[4869]: E0130 21:46:01.461864 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:46:01.961839914 +0000 UTC m=+162.847597939 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.499878 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rr69r"] Jan 30 21:46:01 crc kubenswrapper[4869]: W0130 21:46:01.509488 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb394f841_ea61_41c7_9b4b_7ad185073b70.slice/crio-04a4be37398446f96aa39027422ba11ed4e3b6d9aa04fc7c9944012408725b2b WatchSource:0}: Error finding container 04a4be37398446f96aa39027422ba11ed4e3b6d9aa04fc7c9944012408725b2b: Status 404 returned error can't find the container with id 04a4be37398446f96aa39027422ba11ed4e3b6d9aa04fc7c9944012408725b2b Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.533033 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-86lcz" Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.541334 4869 generic.go:334] "Generic (PLEG): container finished" podID="a79f57ed-ffe5-4f65-acd5-0bcd42e47a02" containerID="3acc01a17eef1a4e05360ce28fd5d642e8521042e0052ea8fa9a9a5b5c342794" exitCode=0 Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.541432 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2g6w" event={"ID":"a79f57ed-ffe5-4f65-acd5-0bcd42e47a02","Type":"ContainerDied","Data":"3acc01a17eef1a4e05360ce28fd5d642e8521042e0052ea8fa9a9a5b5c342794"} Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.547817 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-lf7bw" event={"ID":"43f6cf85-1016-4b61-b95a-9ad7c66bb29f","Type":"ContainerDied","Data":"c1066cd4f87de1004607d9ef923935985cbe0d2d85c58407e8093adf8bb55c15"} Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.547857 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1066cd4f87de1004607d9ef923935985cbe0d2d85c58407e8093adf8bb55c15" Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.547972 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-lf7bw" Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.551470 4869 generic.go:334] "Generic (PLEG): container finished" podID="2f93f920-68d0-41d1-8a20-ca174eda2fcd" containerID="59a37c2e91b00a785cdb839e037e3619dcefa814a42b76761168156aa1a86761" exitCode=0 Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.551595 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jkgvk" event={"ID":"2f93f920-68d0-41d1-8a20-ca174eda2fcd","Type":"ContainerDied","Data":"59a37c2e91b00a785cdb839e037e3619dcefa814a42b76761168156aa1a86761"} Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.559081 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.563213 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:46:01 crc kubenswrapper[4869]: E0130 21:46:01.563690 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:46:02.063673423 +0000 UTC m=+162.949431448 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.579191 4869 generic.go:334] "Generic (PLEG): container finished" podID="4ebfcb0e-58a5-4ab1-894f-1a6093921531" containerID="8af38011d1e1fc94050eede5dcd98e753fe3a170e05b3363de5ffd21158228f3" exitCode=0 Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.579266 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6v7j4" event={"ID":"4ebfcb0e-58a5-4ab1-894f-1a6093921531","Type":"ContainerDied","Data":"8af38011d1e1fc94050eede5dcd98e753fe3a170e05b3363de5ffd21158228f3"} Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.587788 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rr69r" event={"ID":"b394f841-ea61-41c7-9b4b-7ad185073b70","Type":"ContainerStarted","Data":"04a4be37398446f96aa39027422ba11ed4e3b6d9aa04fc7c9944012408725b2b"} Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.598705 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gs5md" event={"ID":"f2aeffca-8872-4e12-8651-4fa2fe16be8e","Type":"ContainerStarted","Data":"b11e8c1cd9eb86480a8289338e2f21c745eedddc2fdb54fd4b932691f2555319"} Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.598748 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gs5md" 
event={"ID":"f2aeffca-8872-4e12-8651-4fa2fe16be8e","Type":"ContainerStarted","Data":"f985b737fd2f1d1eb52ee7009d293a17e6d1472ee02f7047028b09e2afe90c5e"} Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.606637 4869 generic.go:334] "Generic (PLEG): container finished" podID="98ffc112-5fb6-4001-b071-2df7e3d90fd2" containerID="932b5329c5999ef2d1f4256d38f55de123ae7b1a5cc8d83b06afacffd52808a4" exitCode=0 Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.606701 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nlmmn" event={"ID":"98ffc112-5fb6-4001-b071-2df7e3d90fd2","Type":"ContainerDied","Data":"932b5329c5999ef2d1f4256d38f55de123ae7b1a5cc8d83b06afacffd52808a4"} Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.609887 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a518d718-eb86-48bf-83ba-b6465b09ff50","Type":"ContainerStarted","Data":"4ec9c05b913df3b0530c5dc897a09c637e82b19963dd729d201d2ae3f0d6493e"} Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.609965 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a518d718-eb86-48bf-83ba-b6465b09ff50","Type":"ContainerStarted","Data":"b4d2186a8197ee41be0887c60d3061ee988d18dce36b3d7e5ed15101a2a52f6e"} Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.645489 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=1.6454697409999999 podStartE2EDuration="1.645469741s" podCreationTimestamp="2026-01-30 21:46:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:46:01.644703277 +0000 UTC m=+162.530461312" watchObservedRunningTime="2026-01-30 21:46:01.645469741 +0000 UTC m=+162.531227756" Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.648963 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-k4rff container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.649040 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-k4rff" podUID="49697d03-35f3-4fa3-9141-2bb8ae8eccab" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.652149 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-k4rff container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.652329 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k4rff" podUID="49697d03-35f3-4fa3-9141-2bb8ae8eccab" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.666742 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:46:01 crc kubenswrapper[4869]: E0130 21:46:01.666946 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:46:02.166913037 +0000 UTC m=+163.052671052 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.667052 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:46:01 crc kubenswrapper[4869]: E0130 21:46:01.667609 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:46:02.167593818 +0000 UTC m=+163.053351843 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.725671 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w587f" Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.732800 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-gs5md" podStartSLOduration=12.732783272 podStartE2EDuration="12.732783272s" podCreationTimestamp="2026-01-30 21:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:46:01.679560075 +0000 UTC m=+162.565318110" watchObservedRunningTime="2026-01-30 21:46:01.732783272 +0000 UTC m=+162.618541297" Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.742304 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w587f" Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.768419 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:46:01 crc kubenswrapper[4869]: E0130 21:46:01.768726 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:46:02.268691904 +0000 UTC m=+163.154449929 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.874013 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:46:01 crc kubenswrapper[4869]: E0130 21:46:01.874354 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:46:02.374338674 +0000 UTC m=+163.260096699 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j58b4" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.895966 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-86lcz"] Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.977441 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:46:01 crc kubenswrapper[4869]: E0130 21:46:01.977855 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:46:02.477717191 +0000 UTC m=+163.363475216 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.986168 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-px2s8" Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.990264 4869 patch_prober.go:28] interesting pod/machine-config-daemon-vzgdv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:46:01 crc kubenswrapper[4869]: I0130 21:46:01.990319 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.029008 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9zpxh" Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.037931 4869 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-30T21:46:01.070853264Z","Handler":null,"Name":""} Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.044160 4869 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: 
/var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.044197 4869 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.078755 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.088597 4869 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.088661 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.137557 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j58b4\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.180231 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.186413 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.202455 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tsp88"] Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.203736 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tsp88" Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.209974 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.215599 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.219348 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tsp88"] Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.283662 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3-utilities\") pod \"redhat-operators-tsp88\" (UID: \"0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3\") " pod="openshift-marketplace/redhat-operators-tsp88" Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.283831 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3-catalog-content\") pod \"redhat-operators-tsp88\" (UID: \"0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3\") " pod="openshift-marketplace/redhat-operators-tsp88" Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.283877 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj9dd\" (UniqueName: \"kubernetes.io/projected/0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3-kube-api-access-xj9dd\") pod \"redhat-operators-tsp88\" (UID: \"0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3\") " pod="openshift-marketplace/redhat-operators-tsp88" Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.383386 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q4k7p" Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.384599 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj9dd\" (UniqueName: \"kubernetes.io/projected/0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3-kube-api-access-xj9dd\") pod \"redhat-operators-tsp88\" (UID: \"0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3\") " pod="openshift-marketplace/redhat-operators-tsp88" Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.384626 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3-catalog-content\") pod \"redhat-operators-tsp88\" (UID: \"0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3\") " pod="openshift-marketplace/redhat-operators-tsp88" Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.384683 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3-utilities\") pod \"redhat-operators-tsp88\" (UID: \"0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3\") " pod="openshift-marketplace/redhat-operators-tsp88" Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.385187 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3-utilities\") pod \"redhat-operators-tsp88\" (UID: 
\"0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3\") " pod="openshift-marketplace/redhat-operators-tsp88" Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.385488 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3-catalog-content\") pod \"redhat-operators-tsp88\" (UID: \"0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3\") " pod="openshift-marketplace/redhat-operators-tsp88" Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.415861 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj9dd\" (UniqueName: \"kubernetes.io/projected/0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3-kube-api-access-xj9dd\") pod \"redhat-operators-tsp88\" (UID: \"0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3\") " pod="openshift-marketplace/redhat-operators-tsp88" Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.455248 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-2zzvw" Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.460267 4869 patch_prober.go:28] interesting pod/router-default-5444994796-2zzvw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 21:46:02 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Jan 30 21:46:02 crc kubenswrapper[4869]: [+]process-running ok Jan 30 21:46:02 crc kubenswrapper[4869]: healthz check failed Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.460369 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2zzvw" podUID="3764a402-d82c-498f-91ab-91f657d352c6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.478793 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-j58b4"] Jan 30 21:46:02 crc kubenswrapper[4869]: W0130 21:46:02.512772 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcff2bdad_4c6e_44bc_977a_376e09638df1.slice/crio-963affe4e9bdb0930a031924482bddc80eeff748caf3f306e00f7599d557468a WatchSource:0}: Error finding container 963affe4e9bdb0930a031924482bddc80eeff748caf3f306e00f7599d557468a: Status 404 returned error can't find the container with id 963affe4e9bdb0930a031924482bddc80eeff748caf3f306e00f7599d557468a Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.525994 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tsp88" Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.588499 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b980f4db-64d3-48c9-9ff8-18f23c4888cd-metrics-certs\") pod \"network-metrics-daemon-45w6p\" (UID: \"b980f4db-64d3-48c9-9ff8-18f23c4888cd\") " pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.604763 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b980f4db-64d3-48c9-9ff8-18f23c4888cd-metrics-certs\") pod \"network-metrics-daemon-45w6p\" (UID: \"b980f4db-64d3-48c9-9ff8-18f23c4888cd\") " pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.610405 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-w5fzr"] Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.612184 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w5fzr" Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.617915 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-45w6p" Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.627678 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w5fzr"] Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.636201 4869 generic.go:334] "Generic (PLEG): container finished" podID="b394f841-ea61-41c7-9b4b-7ad185073b70" containerID="ce83adaaaac4cdc0b56959703785f8f7ae028e2bda9f5ae42e3667822932d150" exitCode=0 Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.636578 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rr69r" event={"ID":"b394f841-ea61-41c7-9b4b-7ad185073b70","Type":"ContainerDied","Data":"ce83adaaaac4cdc0b56959703785f8f7ae028e2bda9f5ae42e3667822932d150"} Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.640638 4869 generic.go:334] "Generic (PLEG): container finished" podID="72e1dcaa-1805-4157-8fd7-0e00177aaf4c" containerID="c1e086b18182e15352da21f44e1778ef5b89e5673ec387e66e954209de1ae9ff" exitCode=0 Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.640733 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-86lcz" event={"ID":"72e1dcaa-1805-4157-8fd7-0e00177aaf4c","Type":"ContainerDied","Data":"c1e086b18182e15352da21f44e1778ef5b89e5673ec387e66e954209de1ae9ff"} Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.640773 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-86lcz" event={"ID":"72e1dcaa-1805-4157-8fd7-0e00177aaf4c","Type":"ContainerStarted","Data":"7aa22409dc662f9e522fbe0a2a055b7e4f2ea0b9373ce7a325667b4a5b226fa5"} Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.643923 4869 generic.go:334] "Generic (PLEG): container finished" podID="a518d718-eb86-48bf-83ba-b6465b09ff50" containerID="4ec9c05b913df3b0530c5dc897a09c637e82b19963dd729d201d2ae3f0d6493e" exitCode=0 Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.643990 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"a518d718-eb86-48bf-83ba-b6465b09ff50","Type":"ContainerDied","Data":"4ec9c05b913df3b0530c5dc897a09c637e82b19963dd729d201d2ae3f0d6493e"} Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.654888 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" event={"ID":"cff2bdad-4c6e-44bc-977a-376e09638df1","Type":"ContainerStarted","Data":"963affe4e9bdb0930a031924482bddc80eeff748caf3f306e00f7599d557468a"} Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.690732 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdk4s\" (UniqueName: \"kubernetes.io/projected/85baedd6-e513-4741-98cf-ef39cfda8ecb-kube-api-access-jdk4s\") pod \"redhat-operators-w5fzr\" (UID: \"85baedd6-e513-4741-98cf-ef39cfda8ecb\") " pod="openshift-marketplace/redhat-operators-w5fzr" Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.690814 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85baedd6-e513-4741-98cf-ef39cfda8ecb-utilities\") pod \"redhat-operators-w5fzr\" (UID: \"85baedd6-e513-4741-98cf-ef39cfda8ecb\") " pod="openshift-marketplace/redhat-operators-w5fzr" Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.690833 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85baedd6-e513-4741-98cf-ef39cfda8ecb-catalog-content\") pod \"redhat-operators-w5fzr\" (UID: \"85baedd6-e513-4741-98cf-ef39cfda8ecb\") " pod="openshift-marketplace/redhat-operators-w5fzr" Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.699128 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-b86ql" Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.800297 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdk4s\" (UniqueName: \"kubernetes.io/projected/85baedd6-e513-4741-98cf-ef39cfda8ecb-kube-api-access-jdk4s\") pod \"redhat-operators-w5fzr\" (UID: \"85baedd6-e513-4741-98cf-ef39cfda8ecb\") " pod="openshift-marketplace/redhat-operators-w5fzr" Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.800351 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85baedd6-e513-4741-98cf-ef39cfda8ecb-utilities\") pod \"redhat-operators-w5fzr\" (UID: \"85baedd6-e513-4741-98cf-ef39cfda8ecb\") " pod="openshift-marketplace/redhat-operators-w5fzr" Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.800373 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85baedd6-e513-4741-98cf-ef39cfda8ecb-catalog-content\") pod \"redhat-operators-w5fzr\" (UID: \"85baedd6-e513-4741-98cf-ef39cfda8ecb\") " pod="openshift-marketplace/redhat-operators-w5fzr" Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.800816 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85baedd6-e513-4741-98cf-ef39cfda8ecb-catalog-content\") pod \"redhat-operators-w5fzr\" (UID: \"85baedd6-e513-4741-98cf-ef39cfda8ecb\") " pod="openshift-marketplace/redhat-operators-w5fzr" Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.801348 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85baedd6-e513-4741-98cf-ef39cfda8ecb-utilities\") pod \"redhat-operators-w5fzr\" (UID: \"85baedd6-e513-4741-98cf-ef39cfda8ecb\") " pod="openshift-marketplace/redhat-operators-w5fzr" Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.851721 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdk4s\" (UniqueName: \"kubernetes.io/projected/85baedd6-e513-4741-98cf-ef39cfda8ecb-kube-api-access-jdk4s\") pod \"redhat-operators-w5fzr\" (UID: \"85baedd6-e513-4741-98cf-ef39cfda8ecb\") " pod="openshift-marketplace/redhat-operators-w5fzr" Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.938244 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w5fzr" Jan 30 21:46:02 crc kubenswrapper[4869]: I0130 21:46:02.998331 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tsp88"] Jan 30 21:46:03 crc kubenswrapper[4869]: W0130 21:46:03.063966 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d2f572b_c0d3_4479_aaa0_a4e210d5e8e3.slice/crio-98700267ce47feed373823b3bdfe9409b839c62aaf80fce9d33af523817eabd2 WatchSource:0}: Error finding container 98700267ce47feed373823b3bdfe9409b839c62aaf80fce9d33af523817eabd2: Status 404 returned error can't find the container with id 98700267ce47feed373823b3bdfe9409b839c62aaf80fce9d33af523817eabd2 Jan 30 21:46:03 crc kubenswrapper[4869]: I0130 21:46:03.116559 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-45w6p"] Jan 30 21:46:03 crc kubenswrapper[4869]: I0130 21:46:03.460974 4869 patch_prober.go:28] interesting pod/router-default-5444994796-2zzvw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 21:46:03 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Jan 30 21:46:03 crc kubenswrapper[4869]: [+]process-running ok Jan 30 21:46:03 crc kubenswrapper[4869]: healthz check failed Jan 30 21:46:03 crc kubenswrapper[4869]: I0130 21:46:03.461443 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2zzvw" podUID="3764a402-d82c-498f-91ab-91f657d352c6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 21:46:03 crc kubenswrapper[4869]: I0130 21:46:03.484788 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w5fzr"] Jan 30 21:46:03 crc kubenswrapper[4869]: I0130 21:46:03.667874 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" event={"ID":"cff2bdad-4c6e-44bc-977a-376e09638df1","Type":"ContainerStarted","Data":"56e409bc26eee10fe3f1fcc90fb807f96dc79c2051a6f26a0a32c0e85ebee8fc"} Jan 30 21:46:03 crc kubenswrapper[4869]: I0130 21:46:03.668131 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:46:03 crc kubenswrapper[4869]: I0130 21:46:03.677863 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-45w6p" 
event={"ID":"b980f4db-64d3-48c9-9ff8-18f23c4888cd","Type":"ContainerStarted","Data":"25068ec2aa5d5e3d7083750ff7a686a0491145ed0d38706bd7b38171d5a8605d"} Jan 30 21:46:03 crc kubenswrapper[4869]: I0130 21:46:03.677965 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-45w6p" event={"ID":"b980f4db-64d3-48c9-9ff8-18f23c4888cd","Type":"ContainerStarted","Data":"c1100a50c1fcda88847701809ea70bbe8c53ea0345f906f0b2d900dabbefb8ca"} Jan 30 21:46:03 crc kubenswrapper[4869]: I0130 21:46:03.681768 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w5fzr" event={"ID":"85baedd6-e513-4741-98cf-ef39cfda8ecb","Type":"ContainerStarted","Data":"7383834f164503e0a83b8fe1eb186a4df9ca6c5564360e49952b523d2aab5a0e"} Jan 30 21:46:03 crc kubenswrapper[4869]: I0130 21:46:03.688511 4869 generic.go:334] "Generic (PLEG): container finished" podID="0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3" containerID="421a3316b5543bc753352ad25fa04eba21113597087cd9d0f98ce1ea86b2c4de" exitCode=0 Jan 30 21:46:03 crc kubenswrapper[4869]: I0130 21:46:03.698455 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tsp88" event={"ID":"0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3","Type":"ContainerDied","Data":"421a3316b5543bc753352ad25fa04eba21113597087cd9d0f98ce1ea86b2c4de"} Jan 30 21:46:03 crc kubenswrapper[4869]: I0130 21:46:03.698546 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tsp88" event={"ID":"0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3","Type":"ContainerStarted","Data":"98700267ce47feed373823b3bdfe9409b839c62aaf80fce9d33af523817eabd2"} Jan 30 21:46:03 crc kubenswrapper[4869]: I0130 21:46:03.700607 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" podStartSLOduration=143.700582222 podStartE2EDuration="2m23.700582222s" podCreationTimestamp="2026-01-30 21:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:46:03.693788498 +0000 UTC m=+164.579546523" watchObservedRunningTime="2026-01-30 21:46:03.700582222 +0000 UTC m=+164.586340247" Jan 30 21:46:03 crc kubenswrapper[4869]: I0130 21:46:03.898597 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 30 21:46:04 crc kubenswrapper[4869]: I0130 21:46:04.180215 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 21:46:04 crc kubenswrapper[4869]: I0130 21:46:04.242046 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a518d718-eb86-48bf-83ba-b6465b09ff50-kubelet-dir\") pod \"a518d718-eb86-48bf-83ba-b6465b09ff50\" (UID: \"a518d718-eb86-48bf-83ba-b6465b09ff50\") " Jan 30 21:46:04 crc kubenswrapper[4869]: I0130 21:46:04.242165 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a518d718-eb86-48bf-83ba-b6465b09ff50-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a518d718-eb86-48bf-83ba-b6465b09ff50" (UID: "a518d718-eb86-48bf-83ba-b6465b09ff50"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:46:04 crc kubenswrapper[4869]: I0130 21:46:04.242836 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a518d718-eb86-48bf-83ba-b6465b09ff50-kube-api-access\") pod \"a518d718-eb86-48bf-83ba-b6465b09ff50\" (UID: \"a518d718-eb86-48bf-83ba-b6465b09ff50\") " Jan 30 21:46:04 crc kubenswrapper[4869]: I0130 21:46:04.248914 4869 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a518d718-eb86-48bf-83ba-b6465b09ff50-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:04 crc kubenswrapper[4869]: I0130 21:46:04.276159 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a518d718-eb86-48bf-83ba-b6465b09ff50-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a518d718-eb86-48bf-83ba-b6465b09ff50" (UID: "a518d718-eb86-48bf-83ba-b6465b09ff50"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:46:04 crc kubenswrapper[4869]: I0130 21:46:04.304749 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 30 21:46:04 crc kubenswrapper[4869]: E0130 21:46:04.305073 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a518d718-eb86-48bf-83ba-b6465b09ff50" containerName="pruner" Jan 30 21:46:04 crc kubenswrapper[4869]: I0130 21:46:04.305090 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a518d718-eb86-48bf-83ba-b6465b09ff50" containerName="pruner" Jan 30 21:46:04 crc kubenswrapper[4869]: I0130 21:46:04.305237 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a518d718-eb86-48bf-83ba-b6465b09ff50" containerName="pruner" Jan 30 21:46:04 crc kubenswrapper[4869]: I0130 21:46:04.305723 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 21:46:04 crc kubenswrapper[4869]: I0130 21:46:04.310137 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 30 21:46:04 crc kubenswrapper[4869]: I0130 21:46:04.313566 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 30 21:46:04 crc kubenswrapper[4869]: I0130 21:46:04.318763 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 30 21:46:04 crc kubenswrapper[4869]: I0130 21:46:04.350048 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ba492aa2-ddfe-4603-bf9f-6be40a90f4c6-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ba492aa2-ddfe-4603-bf9f-6be40a90f4c6\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 21:46:04 crc kubenswrapper[4869]: I0130 21:46:04.350470 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ba492aa2-ddfe-4603-bf9f-6be40a90f4c6-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ba492aa2-ddfe-4603-bf9f-6be40a90f4c6\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 21:46:04 crc kubenswrapper[4869]: I0130 21:46:04.350582 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a518d718-eb86-48bf-83ba-b6465b09ff50-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:04 crc kubenswrapper[4869]: I0130 21:46:04.455836 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ba492aa2-ddfe-4603-bf9f-6be40a90f4c6-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ba492aa2-ddfe-4603-bf9f-6be40a90f4c6\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 21:46:04 crc kubenswrapper[4869]: I0130 21:46:04.455914 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ba492aa2-ddfe-4603-bf9f-6be40a90f4c6-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ba492aa2-ddfe-4603-bf9f-6be40a90f4c6\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 21:46:04 crc kubenswrapper[4869]: I0130 21:46:04.456121 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ba492aa2-ddfe-4603-bf9f-6be40a90f4c6-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ba492aa2-ddfe-4603-bf9f-6be40a90f4c6\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 21:46:04 crc kubenswrapper[4869]: I0130 21:46:04.464055 4869 patch_prober.go:28] interesting pod/router-default-5444994796-2zzvw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 21:46:04 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Jan 30 21:46:04 crc kubenswrapper[4869]: [+]process-running ok Jan 30 21:46:04 crc kubenswrapper[4869]: healthz check failed Jan 30 21:46:04 crc kubenswrapper[4869]: I0130 21:46:04.464139 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2zzvw" 
podUID="3764a402-d82c-498f-91ab-91f657d352c6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 21:46:04 crc kubenswrapper[4869]: I0130 21:46:04.475443 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-nrn8v" Jan 30 21:46:04 crc kubenswrapper[4869]: I0130 21:46:04.485975 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ba492aa2-ddfe-4603-bf9f-6be40a90f4c6-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ba492aa2-ddfe-4603-bf9f-6be40a90f4c6\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 21:46:04 crc kubenswrapper[4869]: I0130 21:46:04.663986 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 21:46:04 crc kubenswrapper[4869]: I0130 21:46:04.736381 4869 generic.go:334] "Generic (PLEG): container finished" podID="85baedd6-e513-4741-98cf-ef39cfda8ecb" containerID="320f1241c0ca54543301c098bac50ae186cd97a4ad74a6748b4c88f66aff5d62" exitCode=0 Jan 30 21:46:04 crc kubenswrapper[4869]: I0130 21:46:04.736494 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w5fzr" event={"ID":"85baedd6-e513-4741-98cf-ef39cfda8ecb","Type":"ContainerDied","Data":"320f1241c0ca54543301c098bac50ae186cd97a4ad74a6748b4c88f66aff5d62"} Jan 30 21:46:04 crc kubenswrapper[4869]: I0130 21:46:04.764225 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a518d718-eb86-48bf-83ba-b6465b09ff50","Type":"ContainerDied","Data":"b4d2186a8197ee41be0887c60d3061ee988d18dce36b3d7e5ed15101a2a52f6e"} Jan 30 21:46:04 crc kubenswrapper[4869]: I0130 21:46:04.764310 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4d2186a8197ee41be0887c60d3061ee988d18dce36b3d7e5ed15101a2a52f6e" Jan 30 21:46:04 crc kubenswrapper[4869]: I0130 21:46:04.764513 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 21:46:04 crc kubenswrapper[4869]: I0130 21:46:04.809254 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-45w6p" event={"ID":"b980f4db-64d3-48c9-9ff8-18f23c4888cd","Type":"ContainerStarted","Data":"0f00cd411bbe78682c5879157a6b40c19edd59dedf03e672f62f9472f902d1e7"} Jan 30 21:46:04 crc kubenswrapper[4869]: I0130 21:46:04.870575 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-45w6p" podStartSLOduration=144.8705497 podStartE2EDuration="2m24.8705497s" podCreationTimestamp="2026-01-30 21:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:46:04.859582185 +0000 UTC m=+165.745340210" watchObservedRunningTime="2026-01-30 21:46:04.8705497 +0000 UTC m=+165.756307725" Jan 30 21:46:05 crc kubenswrapper[4869]: I0130 21:46:05.086261 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 30 21:46:05 crc kubenswrapper[4869]: W0130 21:46:05.118305 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podba492aa2_ddfe_4603_bf9f_6be40a90f4c6.slice/crio-7fa99e95734135dc7f97cf0ac84cc35156f72dda451391f67ced1ae60e628e09 WatchSource:0}: Error finding container 7fa99e95734135dc7f97cf0ac84cc35156f72dda451391f67ced1ae60e628e09: Status 404 returned error can't find the container with id 7fa99e95734135dc7f97cf0ac84cc35156f72dda451391f67ced1ae60e628e09 Jan 30 21:46:05 crc kubenswrapper[4869]: I0130 21:46:05.190222 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:46:05 crc kubenswrapper[4869]: I0130 21:46:05.460292 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-2zzvw" Jan 30 21:46:05 crc kubenswrapper[4869]: I0130 21:46:05.466640 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-2zzvw" Jan 30 21:46:05 crc kubenswrapper[4869]: I0130 21:46:05.931706 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ba492aa2-ddfe-4603-bf9f-6be40a90f4c6","Type":"ContainerStarted","Data":"7fa99e95734135dc7f97cf0ac84cc35156f72dda451391f67ced1ae60e628e09"} Jan 30 21:46:06 crc kubenswrapper[4869]: I0130 21:46:06.947520 4869 generic.go:334] "Generic (PLEG): container finished" podID="ba492aa2-ddfe-4603-bf9f-6be40a90f4c6" containerID="ec1b68928ac33eb8fc88027d35a2f7b0907d87b99681ce00f897a3e68233a2e7" exitCode=0 Jan 30 21:46:06 crc kubenswrapper[4869]: I0130 21:46:06.947612 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ba492aa2-ddfe-4603-bf9f-6be40a90f4c6","Type":"ContainerDied","Data":"ec1b68928ac33eb8fc88027d35a2f7b0907d87b99681ce00f897a3e68233a2e7"} Jan 30 21:46:11 crc kubenswrapper[4869]: I0130 21:46:11.199219 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-mq7xx" Jan 30 21:46:11 crc kubenswrapper[4869]: I0130 21:46:11.205174 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-mq7xx" Jan 30 21:46:11 crc kubenswrapper[4869]: I0130 
Jan 30 21:46:11 crc kubenswrapper[4869]: I0130 21:46:11.647023 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-k4rff container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Jan 30 21:46:11 crc kubenswrapper[4869]: I0130 21:46:11.647085 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-k4rff" podUID="49697d03-35f3-4fa3-9141-2bb8ae8eccab" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Jan 30 21:46:11 crc kubenswrapper[4869]: I0130 21:46:11.647432 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-k4rff container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Jan 30 21:46:11 crc kubenswrapper[4869]: I0130 21:46:11.647545 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k4rff" podUID="49697d03-35f3-4fa3-9141-2bb8ae8eccab" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Jan 30 21:46:16 crc kubenswrapper[4869]: I0130 21:46:16.724655 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 30 21:46:16 crc kubenswrapper[4869]: I0130 21:46:16.923294 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ba492aa2-ddfe-4603-bf9f-6be40a90f4c6-kube-api-access\") pod \"ba492aa2-ddfe-4603-bf9f-6be40a90f4c6\" (UID: \"ba492aa2-ddfe-4603-bf9f-6be40a90f4c6\") "
Jan 30 21:46:16 crc kubenswrapper[4869]: I0130 21:46:16.923371 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ba492aa2-ddfe-4603-bf9f-6be40a90f4c6-kubelet-dir\") pod \"ba492aa2-ddfe-4603-bf9f-6be40a90f4c6\" (UID: \"ba492aa2-ddfe-4603-bf9f-6be40a90f4c6\") "
Jan 30 21:46:16 crc kubenswrapper[4869]: I0130 21:46:16.923771 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba492aa2-ddfe-4603-bf9f-6be40a90f4c6-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ba492aa2-ddfe-4603-bf9f-6be40a90f4c6" (UID: "ba492aa2-ddfe-4603-bf9f-6be40a90f4c6"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
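[Editor's note] With liveness and readiness probes for several pods failing in the same window, it can help to reduce the journal to one line per prober.go:107 record. A small stand-alone filter along those lines; the regexp targets the format shown above, including the escaped quotes inside the output= field, and may need adjusting for other kubelet builds:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches the prober.go:107 "Probe failed" records above and captures the
// probe type, pod, and failure output (allowing \" escapes inside output=).
var probeRe = regexp.MustCompile(
	`"Probe failed" probeType="([^"]+)" pod="([^"]+)".*?output="((?:[^"\\]|\\.)*)"`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		if m := probeRe.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Printf("%-9s %-60s %s\n", m[1], m[2], m[3])
		}
	}
}
```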
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:46:17 crc kubenswrapper[4869]: I0130 21:46:17.025771 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ba492aa2-ddfe-4603-bf9f-6be40a90f4c6-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:17 crc kubenswrapper[4869]: I0130 21:46:17.025855 4869 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ba492aa2-ddfe-4603-bf9f-6be40a90f4c6-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:17 crc kubenswrapper[4869]: I0130 21:46:17.064033 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ba492aa2-ddfe-4603-bf9f-6be40a90f4c6","Type":"ContainerDied","Data":"7fa99e95734135dc7f97cf0ac84cc35156f72dda451391f67ced1ae60e628e09"} Jan 30 21:46:17 crc kubenswrapper[4869]: I0130 21:46:17.064089 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fa99e95734135dc7f97cf0ac84cc35156f72dda451391f67ced1ae60e628e09" Jan 30 21:46:17 crc kubenswrapper[4869]: I0130 21:46:17.064158 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 21:46:17 crc kubenswrapper[4869]: I0130 21:46:17.709091 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-px2s8"] Jan 30 21:46:17 crc kubenswrapper[4869]: I0130 21:46:17.709410 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-px2s8" podUID="8b2bd278-ed0e-4e4e-90a6-9286bc29a664" containerName="controller-manager" containerID="cri-o://261a5a6598d90803f2d7a66da6d9edcfca0133fcb38e5535d12b9f238deb5ef8" gracePeriod=30 Jan 30 21:46:17 crc kubenswrapper[4869]: I0130 21:46:17.736436 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4q8n"] Jan 30 21:46:17 crc kubenswrapper[4869]: I0130 21:46:17.736744 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4q8n" podUID="20c24fb1-6b9c-4688-bb3e-6bc97fce9856" containerName="route-controller-manager" containerID="cri-o://ab41174f167e2e0a914283e6dfe1df9a3a87cfbff56339f682b5022cfedfc582" gracePeriod=30 Jan 30 21:46:20 crc kubenswrapper[4869]: I0130 21:46:20.082663 4869 generic.go:334] "Generic (PLEG): container finished" podID="20c24fb1-6b9c-4688-bb3e-6bc97fce9856" containerID="ab41174f167e2e0a914283e6dfe1df9a3a87cfbff56339f682b5022cfedfc582" exitCode=0 Jan 30 21:46:20 crc kubenswrapper[4869]: I0130 21:46:20.082740 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4q8n" event={"ID":"20c24fb1-6b9c-4688-bb3e-6bc97fce9856","Type":"ContainerDied","Data":"ab41174f167e2e0a914283e6dfe1df9a3a87cfbff56339f682b5022cfedfc582"} Jan 30 21:46:20 crc kubenswrapper[4869]: I0130 21:46:20.084653 4869 generic.go:334] "Generic (PLEG): container finished" podID="8b2bd278-ed0e-4e4e-90a6-9286bc29a664" containerID="261a5a6598d90803f2d7a66da6d9edcfca0133fcb38e5535d12b9f238deb5ef8" exitCode=0 Jan 30 21:46:20 crc kubenswrapper[4869]: I0130 21:46:20.084693 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-879f6c89f-px2s8" event={"ID":"8b2bd278-ed0e-4e4e-90a6-9286bc29a664","Type":"ContainerDied","Data":"261a5a6598d90803f2d7a66da6d9edcfca0133fcb38e5535d12b9f238deb5ef8"} Jan 30 21:46:21 crc kubenswrapper[4869]: I0130 21:46:21.661672 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-k4rff" Jan 30 21:46:21 crc kubenswrapper[4869]: I0130 21:46:21.814544 4869 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-t4q8n container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Jan 30 21:46:21 crc kubenswrapper[4869]: I0130 21:46:21.814608 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4q8n" podUID="20c24fb1-6b9c-4688-bb3e-6bc97fce9856" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Jan 30 21:46:21 crc kubenswrapper[4869]: I0130 21:46:21.969808 4869 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-px2s8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 30 21:46:21 crc kubenswrapper[4869]: I0130 21:46:21.969865 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-px2s8" podUID="8b2bd278-ed0e-4e4e-90a6-9286bc29a664" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 30 21:46:22 crc kubenswrapper[4869]: I0130 21:46:22.221037 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:46:26 crc kubenswrapper[4869]: I0130 21:46:26.904963 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-px2s8" Jan 30 21:46:26 crc kubenswrapper[4869]: I0130 21:46:26.937773 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7c5f5c5486-mqt4f"] Jan 30 21:46:26 crc kubenswrapper[4869]: E0130 21:46:26.937998 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b2bd278-ed0e-4e4e-90a6-9286bc29a664" containerName="controller-manager" Jan 30 21:46:26 crc kubenswrapper[4869]: I0130 21:46:26.938012 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b2bd278-ed0e-4e4e-90a6-9286bc29a664" containerName="controller-manager" Jan 30 21:46:26 crc kubenswrapper[4869]: E0130 21:46:26.938020 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba492aa2-ddfe-4603-bf9f-6be40a90f4c6" containerName="pruner" Jan 30 21:46:26 crc kubenswrapper[4869]: I0130 21:46:26.938027 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba492aa2-ddfe-4603-bf9f-6be40a90f4c6" containerName="pruner" Jan 30 21:46:26 crc kubenswrapper[4869]: I0130 21:46:26.938139 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b2bd278-ed0e-4e4e-90a6-9286bc29a664" containerName="controller-manager" Jan 30 21:46:26 crc kubenswrapper[4869]: I0130 21:46:26.938159 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba492aa2-ddfe-4603-bf9f-6be40a90f4c6" containerName="pruner" Jan 30 21:46:26 crc kubenswrapper[4869]: I0130 21:46:26.938589 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c5f5c5486-mqt4f" Jan 30 21:46:26 crc kubenswrapper[4869]: I0130 21:46:26.946929 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c5f5c5486-mqt4f"] Jan 30 21:46:27 crc kubenswrapper[4869]: I0130 21:46:27.069356 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b2bd278-ed0e-4e4e-90a6-9286bc29a664-proxy-ca-bundles\") pod \"8b2bd278-ed0e-4e4e-90a6-9286bc29a664\" (UID: \"8b2bd278-ed0e-4e4e-90a6-9286bc29a664\") " Jan 30 21:46:27 crc kubenswrapper[4869]: I0130 21:46:27.069439 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b2bd278-ed0e-4e4e-90a6-9286bc29a664-serving-cert\") pod \"8b2bd278-ed0e-4e4e-90a6-9286bc29a664\" (UID: \"8b2bd278-ed0e-4e4e-90a6-9286bc29a664\") " Jan 30 21:46:27 crc kubenswrapper[4869]: I0130 21:46:27.069492 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b2bd278-ed0e-4e4e-90a6-9286bc29a664-client-ca\") pod \"8b2bd278-ed0e-4e4e-90a6-9286bc29a664\" (UID: \"8b2bd278-ed0e-4e4e-90a6-9286bc29a664\") " Jan 30 21:46:27 crc kubenswrapper[4869]: I0130 21:46:27.069546 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgw2t\" (UniqueName: \"kubernetes.io/projected/8b2bd278-ed0e-4e4e-90a6-9286bc29a664-kube-api-access-sgw2t\") pod \"8b2bd278-ed0e-4e4e-90a6-9286bc29a664\" (UID: \"8b2bd278-ed0e-4e4e-90a6-9286bc29a664\") " Jan 30 21:46:27 crc kubenswrapper[4869]: I0130 21:46:27.069578 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b2bd278-ed0e-4e4e-90a6-9286bc29a664-config\") pod 
\"8b2bd278-ed0e-4e4e-90a6-9286bc29a664\" (UID: \"8b2bd278-ed0e-4e4e-90a6-9286bc29a664\") " Jan 30 21:46:27 crc kubenswrapper[4869]: I0130 21:46:27.069732 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f887\" (UniqueName: \"kubernetes.io/projected/4df62410-466a-4c9a-8568-dd563d6b77fd-kube-api-access-7f887\") pod \"controller-manager-7c5f5c5486-mqt4f\" (UID: \"4df62410-466a-4c9a-8568-dd563d6b77fd\") " pod="openshift-controller-manager/controller-manager-7c5f5c5486-mqt4f" Jan 30 21:46:27 crc kubenswrapper[4869]: I0130 21:46:27.069759 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4df62410-466a-4c9a-8568-dd563d6b77fd-serving-cert\") pod \"controller-manager-7c5f5c5486-mqt4f\" (UID: \"4df62410-466a-4c9a-8568-dd563d6b77fd\") " pod="openshift-controller-manager/controller-manager-7c5f5c5486-mqt4f" Jan 30 21:46:27 crc kubenswrapper[4869]: I0130 21:46:27.069801 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4df62410-466a-4c9a-8568-dd563d6b77fd-proxy-ca-bundles\") pod \"controller-manager-7c5f5c5486-mqt4f\" (UID: \"4df62410-466a-4c9a-8568-dd563d6b77fd\") " pod="openshift-controller-manager/controller-manager-7c5f5c5486-mqt4f" Jan 30 21:46:27 crc kubenswrapper[4869]: I0130 21:46:27.069827 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4df62410-466a-4c9a-8568-dd563d6b77fd-client-ca\") pod \"controller-manager-7c5f5c5486-mqt4f\" (UID: \"4df62410-466a-4c9a-8568-dd563d6b77fd\") " pod="openshift-controller-manager/controller-manager-7c5f5c5486-mqt4f" Jan 30 21:46:27 crc kubenswrapper[4869]: I0130 21:46:27.069867 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4df62410-466a-4c9a-8568-dd563d6b77fd-config\") pod \"controller-manager-7c5f5c5486-mqt4f\" (UID: \"4df62410-466a-4c9a-8568-dd563d6b77fd\") " pod="openshift-controller-manager/controller-manager-7c5f5c5486-mqt4f" Jan 30 21:46:27 crc kubenswrapper[4869]: I0130 21:46:27.070176 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b2bd278-ed0e-4e4e-90a6-9286bc29a664-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8b2bd278-ed0e-4e4e-90a6-9286bc29a664" (UID: "8b2bd278-ed0e-4e4e-90a6-9286bc29a664"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:46:27 crc kubenswrapper[4869]: I0130 21:46:27.071042 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b2bd278-ed0e-4e4e-90a6-9286bc29a664-client-ca" (OuterVolumeSpecName: "client-ca") pod "8b2bd278-ed0e-4e4e-90a6-9286bc29a664" (UID: "8b2bd278-ed0e-4e4e-90a6-9286bc29a664"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:46:27 crc kubenswrapper[4869]: I0130 21:46:27.071098 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b2bd278-ed0e-4e4e-90a6-9286bc29a664-config" (OuterVolumeSpecName: "config") pod "8b2bd278-ed0e-4e4e-90a6-9286bc29a664" (UID: "8b2bd278-ed0e-4e4e-90a6-9286bc29a664"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:46:27 crc kubenswrapper[4869]: I0130 21:46:27.075563 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b2bd278-ed0e-4e4e-90a6-9286bc29a664-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8b2bd278-ed0e-4e4e-90a6-9286bc29a664" (UID: "8b2bd278-ed0e-4e4e-90a6-9286bc29a664"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:46:27 crc kubenswrapper[4869]: I0130 21:46:27.077252 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b2bd278-ed0e-4e4e-90a6-9286bc29a664-kube-api-access-sgw2t" (OuterVolumeSpecName: "kube-api-access-sgw2t") pod "8b2bd278-ed0e-4e4e-90a6-9286bc29a664" (UID: "8b2bd278-ed0e-4e4e-90a6-9286bc29a664"). InnerVolumeSpecName "kube-api-access-sgw2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:46:27 crc kubenswrapper[4869]: I0130 21:46:27.136128 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-px2s8" event={"ID":"8b2bd278-ed0e-4e4e-90a6-9286bc29a664","Type":"ContainerDied","Data":"e5d3ed060c23d02953304dc995fa37acd5c66638bc04e098e21d100677cda710"} Jan 30 21:46:27 crc kubenswrapper[4869]: I0130 21:46:27.136355 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-px2s8" Jan 30 21:46:27 crc kubenswrapper[4869]: I0130 21:46:27.136436 4869 scope.go:117] "RemoveContainer" containerID="261a5a6598d90803f2d7a66da6d9edcfca0133fcb38e5535d12b9f238deb5ef8" Jan 30 21:46:27 crc kubenswrapper[4869]: I0130 21:46:27.163369 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-px2s8"] Jan 30 21:46:27 crc kubenswrapper[4869]: I0130 21:46:27.165639 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-px2s8"] Jan 30 21:46:27 crc kubenswrapper[4869]: I0130 21:46:27.171359 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4df62410-466a-4c9a-8568-dd563d6b77fd-proxy-ca-bundles\") pod \"controller-manager-7c5f5c5486-mqt4f\" (UID: \"4df62410-466a-4c9a-8568-dd563d6b77fd\") " pod="openshift-controller-manager/controller-manager-7c5f5c5486-mqt4f" Jan 30 21:46:27 crc kubenswrapper[4869]: I0130 21:46:27.171393 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4df62410-466a-4c9a-8568-dd563d6b77fd-client-ca\") pod \"controller-manager-7c5f5c5486-mqt4f\" (UID: \"4df62410-466a-4c9a-8568-dd563d6b77fd\") " pod="openshift-controller-manager/controller-manager-7c5f5c5486-mqt4f" Jan 30 21:46:27 crc kubenswrapper[4869]: I0130 21:46:27.171424 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4df62410-466a-4c9a-8568-dd563d6b77fd-config\") pod \"controller-manager-7c5f5c5486-mqt4f\" (UID: \"4df62410-466a-4c9a-8568-dd563d6b77fd\") " pod="openshift-controller-manager/controller-manager-7c5f5c5486-mqt4f" Jan 30 21:46:27 crc kubenswrapper[4869]: I0130 21:46:27.171472 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7f887\" (UniqueName: 
\"kubernetes.io/projected/4df62410-466a-4c9a-8568-dd563d6b77fd-kube-api-access-7f887\") pod \"controller-manager-7c5f5c5486-mqt4f\" (UID: \"4df62410-466a-4c9a-8568-dd563d6b77fd\") " pod="openshift-controller-manager/controller-manager-7c5f5c5486-mqt4f" Jan 30 21:46:27 crc kubenswrapper[4869]: I0130 21:46:27.171488 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4df62410-466a-4c9a-8568-dd563d6b77fd-serving-cert\") pod \"controller-manager-7c5f5c5486-mqt4f\" (UID: \"4df62410-466a-4c9a-8568-dd563d6b77fd\") " pod="openshift-controller-manager/controller-manager-7c5f5c5486-mqt4f" Jan 30 21:46:27 crc kubenswrapper[4869]: I0130 21:46:27.171520 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b2bd278-ed0e-4e4e-90a6-9286bc29a664-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:27 crc kubenswrapper[4869]: I0130 21:46:27.171530 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b2bd278-ed0e-4e4e-90a6-9286bc29a664-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:27 crc kubenswrapper[4869]: I0130 21:46:27.171539 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgw2t\" (UniqueName: \"kubernetes.io/projected/8b2bd278-ed0e-4e4e-90a6-9286bc29a664-kube-api-access-sgw2t\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:27 crc kubenswrapper[4869]: I0130 21:46:27.171548 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b2bd278-ed0e-4e4e-90a6-9286bc29a664-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:27 crc kubenswrapper[4869]: I0130 21:46:27.171557 4869 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b2bd278-ed0e-4e4e-90a6-9286bc29a664-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:27 crc kubenswrapper[4869]: I0130 21:46:27.173042 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4df62410-466a-4c9a-8568-dd563d6b77fd-client-ca\") pod \"controller-manager-7c5f5c5486-mqt4f\" (UID: \"4df62410-466a-4c9a-8568-dd563d6b77fd\") " pod="openshift-controller-manager/controller-manager-7c5f5c5486-mqt4f" Jan 30 21:46:27 crc kubenswrapper[4869]: I0130 21:46:27.173253 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4df62410-466a-4c9a-8568-dd563d6b77fd-config\") pod \"controller-manager-7c5f5c5486-mqt4f\" (UID: \"4df62410-466a-4c9a-8568-dd563d6b77fd\") " pod="openshift-controller-manager/controller-manager-7c5f5c5486-mqt4f" Jan 30 21:46:27 crc kubenswrapper[4869]: I0130 21:46:27.174199 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4df62410-466a-4c9a-8568-dd563d6b77fd-proxy-ca-bundles\") pod \"controller-manager-7c5f5c5486-mqt4f\" (UID: \"4df62410-466a-4c9a-8568-dd563d6b77fd\") " pod="openshift-controller-manager/controller-manager-7c5f5c5486-mqt4f" Jan 30 21:46:27 crc kubenswrapper[4869]: I0130 21:46:27.175669 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4df62410-466a-4c9a-8568-dd563d6b77fd-serving-cert\") pod \"controller-manager-7c5f5c5486-mqt4f\" (UID: \"4df62410-466a-4c9a-8568-dd563d6b77fd\") " 
pod="openshift-controller-manager/controller-manager-7c5f5c5486-mqt4f" Jan 30 21:46:27 crc kubenswrapper[4869]: I0130 21:46:27.189890 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7f887\" (UniqueName: \"kubernetes.io/projected/4df62410-466a-4c9a-8568-dd563d6b77fd-kube-api-access-7f887\") pod \"controller-manager-7c5f5c5486-mqt4f\" (UID: \"4df62410-466a-4c9a-8568-dd563d6b77fd\") " pod="openshift-controller-manager/controller-manager-7c5f5c5486-mqt4f" Jan 30 21:46:27 crc kubenswrapper[4869]: I0130 21:46:27.259408 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c5f5c5486-mqt4f" Jan 30 21:46:27 crc kubenswrapper[4869]: I0130 21:46:27.883778 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b2bd278-ed0e-4e4e-90a6-9286bc29a664" path="/var/lib/kubelet/pods/8b2bd278-ed0e-4e4e-90a6-9286bc29a664/volumes" Jan 30 21:46:27 crc kubenswrapper[4869]: I0130 21:46:27.911556 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:46:31 crc kubenswrapper[4869]: I0130 21:46:31.990510 4869 patch_prober.go:28] interesting pod/machine-config-daemon-vzgdv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:46:31 crc kubenswrapper[4869]: I0130 21:46:31.990814 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:46:32 crc kubenswrapper[4869]: I0130 21:46:32.387852 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4pn7q" Jan 30 21:46:32 crc kubenswrapper[4869]: I0130 21:46:32.814150 4869 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-t4q8n container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 21:46:32 crc kubenswrapper[4869]: I0130 21:46:32.814275 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4q8n" podUID="20c24fb1-6b9c-4688-bb3e-6bc97fce9856" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 21:46:37 crc kubenswrapper[4869]: I0130 21:46:37.680773 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c5f5c5486-mqt4f"] Jan 30 21:46:39 crc kubenswrapper[4869]: I0130 21:46:39.465500 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 30 21:46:39 crc kubenswrapper[4869]: I0130 21:46:39.470126 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 21:46:39 crc kubenswrapper[4869]: I0130 21:46:39.471619 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 30 21:46:39 crc kubenswrapper[4869]: I0130 21:46:39.473648 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 30 21:46:39 crc kubenswrapper[4869]: I0130 21:46:39.474521 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 30 21:46:39 crc kubenswrapper[4869]: I0130 21:46:39.640430 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3b828b21-57e4-4990-b9dd-184fbfc06736-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3b828b21-57e4-4990-b9dd-184fbfc06736\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 21:46:39 crc kubenswrapper[4869]: I0130 21:46:39.640487 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3b828b21-57e4-4990-b9dd-184fbfc06736-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3b828b21-57e4-4990-b9dd-184fbfc06736\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 21:46:39 crc kubenswrapper[4869]: I0130 21:46:39.741847 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3b828b21-57e4-4990-b9dd-184fbfc06736-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3b828b21-57e4-4990-b9dd-184fbfc06736\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 21:46:39 crc kubenswrapper[4869]: I0130 21:46:39.742350 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3b828b21-57e4-4990-b9dd-184fbfc06736-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3b828b21-57e4-4990-b9dd-184fbfc06736\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 21:46:39 crc kubenswrapper[4869]: I0130 21:46:39.742453 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3b828b21-57e4-4990-b9dd-184fbfc06736-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3b828b21-57e4-4990-b9dd-184fbfc06736\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 21:46:39 crc kubenswrapper[4869]: I0130 21:46:39.896786 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3b828b21-57e4-4990-b9dd-184fbfc06736-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3b828b21-57e4-4990-b9dd-184fbfc06736\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 21:46:40 crc kubenswrapper[4869]: I0130 21:46:40.090820 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 21:46:42 crc kubenswrapper[4869]: E0130 21:46:42.262405 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 30 21:46:42 crc kubenswrapper[4869]: E0130 21:46:42.262570 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7m2t8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-b2g6w_openshift-marketplace(a79f57ed-ffe5-4f65-acd5-0bcd42e47a02): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 21:46:42 crc kubenswrapper[4869]: E0130 21:46:42.263801 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-b2g6w" podUID="a79f57ed-ffe5-4f65-acd5-0bcd42e47a02" Jan 30 21:46:42 crc kubenswrapper[4869]: I0130 21:46:42.813744 4869 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-t4q8n container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 21:46:42 crc kubenswrapper[4869]: I0130 21:46:42.814102 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4q8n" podUID="20c24fb1-6b9c-4688-bb3e-6bc97fce9856" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection 
Jan 30 21:46:42 crc kubenswrapper[4869]: I0130 21:46:42.814102 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4q8n" podUID="20c24fb1-6b9c-4688-bb3e-6bc97fce9856" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 30 21:46:44 crc kubenswrapper[4869]: I0130 21:46:44.264379 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 30 21:46:44 crc kubenswrapper[4869]: I0130 21:46:44.265443 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 21:46:44 crc kubenswrapper[4869]: I0130 21:46:44.286662 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 30 21:46:44 crc kubenswrapper[4869]: I0130 21:46:44.400721 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4efc27c1-fb04-4be9-88b2-25b7657400b7-kube-api-access\") pod \"installer-9-crc\" (UID: \"4efc27c1-fb04-4be9-88b2-25b7657400b7\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 21:46:44 crc kubenswrapper[4869]: I0130 21:46:44.400805 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4efc27c1-fb04-4be9-88b2-25b7657400b7-var-lock\") pod \"installer-9-crc\" (UID: \"4efc27c1-fb04-4be9-88b2-25b7657400b7\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 21:46:44 crc kubenswrapper[4869]: I0130 21:46:44.400835 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4efc27c1-fb04-4be9-88b2-25b7657400b7-kubelet-dir\") pod \"installer-9-crc\" (UID: \"4efc27c1-fb04-4be9-88b2-25b7657400b7\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 21:46:44 crc kubenswrapper[4869]: I0130 21:46:44.502482 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4efc27c1-fb04-4be9-88b2-25b7657400b7-var-lock\") pod \"installer-9-crc\" (UID: \"4efc27c1-fb04-4be9-88b2-25b7657400b7\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 21:46:44 crc kubenswrapper[4869]: I0130 21:46:44.502534 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4efc27c1-fb04-4be9-88b2-25b7657400b7-kubelet-dir\") pod \"installer-9-crc\" (UID: \"4efc27c1-fb04-4be9-88b2-25b7657400b7\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 21:46:44 crc kubenswrapper[4869]: I0130 21:46:44.502592 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4efc27c1-fb04-4be9-88b2-25b7657400b7-kube-api-access\") pod \"installer-9-crc\" (UID: \"4efc27c1-fb04-4be9-88b2-25b7657400b7\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 21:46:44 crc kubenswrapper[4869]: I0130 21:46:44.502930 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4efc27c1-fb04-4be9-88b2-25b7657400b7-var-lock\") pod \"installer-9-crc\" (UID: \"4efc27c1-fb04-4be9-88b2-25b7657400b7\") " pod="openshift-kube-apiserver/installer-9-crc"
pod="openshift-kube-apiserver/installer-9-crc" Jan 30 21:46:44 crc kubenswrapper[4869]: I0130 21:46:44.520848 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4efc27c1-fb04-4be9-88b2-25b7657400b7-kube-api-access\") pod \"installer-9-crc\" (UID: \"4efc27c1-fb04-4be9-88b2-25b7657400b7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 21:46:44 crc kubenswrapper[4869]: I0130 21:46:44.593248 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 21:46:45 crc kubenswrapper[4869]: E0130 21:46:45.912217 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 30 21:46:45 crc kubenswrapper[4869]: E0130 21:46:45.912392 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nfs9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-nlmmn_openshift-marketplace(98ffc112-5fb6-4001-b071-2df7e3d90fd2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 21:46:45 crc kubenswrapper[4869]: E0130 21:46:45.913590 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-nlmmn" podUID="98ffc112-5fb6-4001-b071-2df7e3d90fd2" Jan 30 21:46:47 crc kubenswrapper[4869]: E0130 21:46:47.162838 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/community-operators-nlmmn" podUID="98ffc112-5fb6-4001-b071-2df7e3d90fd2" Jan 30 21:46:47 crc kubenswrapper[4869]: E0130 21:46:47.163284 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-b2g6w" podUID="a79f57ed-ffe5-4f65-acd5-0bcd42e47a02" Jan 30 21:46:50 crc kubenswrapper[4869]: E0130 21:46:50.993669 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 30 21:46:50 crc kubenswrapper[4869]: E0130 21:46:50.995398 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gtn52,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-86lcz_openshift-marketplace(72e1dcaa-1805-4157-8fd7-0e00177aaf4c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 21:46:50 crc kubenswrapper[4869]: E0130 21:46:50.996655 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-86lcz" podUID="72e1dcaa-1805-4157-8fd7-0e00177aaf4c" Jan 30 21:46:52 crc kubenswrapper[4869]: I0130 21:46:52.814442 4869 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-t4q8n container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 
21:46:52 crc kubenswrapper[4869]: I0130 21:46:52.814810 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4q8n" podUID="20c24fb1-6b9c-4688-bb3e-6bc97fce9856" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 21:47:01 crc kubenswrapper[4869]: I0130 21:47:01.990958 4869 patch_prober.go:28] interesting pod/machine-config-daemon-vzgdv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:47:01 crc kubenswrapper[4869]: I0130 21:47:01.991013 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:47:01 crc kubenswrapper[4869]: I0130 21:47:01.991060 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" Jan 30 21:47:01 crc kubenswrapper[4869]: I0130 21:47:01.991716 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"30fa528823e4fa19f4de380fa7c5ab78f40c763a6ea6624ba20e1a65a81980d2"} pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 21:47:01 crc kubenswrapper[4869]: I0130 21:47:01.991825 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500" containerName="machine-config-daemon" containerID="cri-o://30fa528823e4fa19f4de380fa7c5ab78f40c763a6ea6624ba20e1a65a81980d2" gracePeriod=600 Jan 30 21:47:02 crc kubenswrapper[4869]: I0130 21:47:02.813919 4869 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-t4q8n container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 21:47:02 crc kubenswrapper[4869]: I0130 21:47:02.814161 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4q8n" podUID="20c24fb1-6b9c-4688-bb3e-6bc97fce9856" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 21:47:04 crc kubenswrapper[4869]: I0130 21:47:04.335418 4869 generic.go:334] "Generic (PLEG): container finished" podID="b6fc0664-5e80-440d-a6e8-4189cdf5c500" containerID="30fa528823e4fa19f4de380fa7c5ab78f40c763a6ea6624ba20e1a65a81980d2" exitCode=0 Jan 30 21:47:04 crc kubenswrapper[4869]: I0130 21:47:04.335508 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" event={"ID":"b6fc0664-5e80-440d-a6e8-4189cdf5c500","Type":"ContainerDied","Data":"30fa528823e4fa19f4de380fa7c5ab78f40c763a6ea6624ba20e1a65a81980d2"} Jan 30 21:47:10 crc kubenswrapper[4869]: I0130 21:47:10.448189 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4q8n" Jan 30 21:47:10 crc kubenswrapper[4869]: I0130 21:47:10.458276 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20c24fb1-6b9c-4688-bb3e-6bc97fce9856-config\") pod \"20c24fb1-6b9c-4688-bb3e-6bc97fce9856\" (UID: \"20c24fb1-6b9c-4688-bb3e-6bc97fce9856\") " Jan 30 21:47:10 crc kubenswrapper[4869]: I0130 21:47:10.458646 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20c24fb1-6b9c-4688-bb3e-6bc97fce9856-serving-cert\") pod \"20c24fb1-6b9c-4688-bb3e-6bc97fce9856\" (UID: \"20c24fb1-6b9c-4688-bb3e-6bc97fce9856\") " Jan 30 21:47:10 crc kubenswrapper[4869]: I0130 21:47:10.458673 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20c24fb1-6b9c-4688-bb3e-6bc97fce9856-client-ca\") pod \"20c24fb1-6b9c-4688-bb3e-6bc97fce9856\" (UID: \"20c24fb1-6b9c-4688-bb3e-6bc97fce9856\") " Jan 30 21:47:10 crc kubenswrapper[4869]: I0130 21:47:10.458696 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4ftk\" (UniqueName: \"kubernetes.io/projected/20c24fb1-6b9c-4688-bb3e-6bc97fce9856-kube-api-access-m4ftk\") pod \"20c24fb1-6b9c-4688-bb3e-6bc97fce9856\" (UID: \"20c24fb1-6b9c-4688-bb3e-6bc97fce9856\") " Jan 30 21:47:10 crc kubenswrapper[4869]: I0130 21:47:10.459280 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20c24fb1-6b9c-4688-bb3e-6bc97fce9856-client-ca" (OuterVolumeSpecName: "client-ca") pod "20c24fb1-6b9c-4688-bb3e-6bc97fce9856" (UID: "20c24fb1-6b9c-4688-bb3e-6bc97fce9856"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:47:10 crc kubenswrapper[4869]: I0130 21:47:10.459489 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20c24fb1-6b9c-4688-bb3e-6bc97fce9856-config" (OuterVolumeSpecName: "config") pod "20c24fb1-6b9c-4688-bb3e-6bc97fce9856" (UID: "20c24fb1-6b9c-4688-bb3e-6bc97fce9856"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:47:10 crc kubenswrapper[4869]: I0130 21:47:10.506770 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20c24fb1-6b9c-4688-bb3e-6bc97fce9856-kube-api-access-m4ftk" (OuterVolumeSpecName: "kube-api-access-m4ftk") pod "20c24fb1-6b9c-4688-bb3e-6bc97fce9856" (UID: "20c24fb1-6b9c-4688-bb3e-6bc97fce9856"). InnerVolumeSpecName "kube-api-access-m4ftk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:47:10 crc kubenswrapper[4869]: I0130 21:47:10.507796 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20c24fb1-6b9c-4688-bb3e-6bc97fce9856-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "20c24fb1-6b9c-4688-bb3e-6bc97fce9856" (UID: "20c24fb1-6b9c-4688-bb3e-6bc97fce9856"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:47:10 crc kubenswrapper[4869]: I0130 21:47:10.516227 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c7d7f8749-dqjfq"] Jan 30 21:47:10 crc kubenswrapper[4869]: E0130 21:47:10.516503 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20c24fb1-6b9c-4688-bb3e-6bc97fce9856" containerName="route-controller-manager" Jan 30 21:47:10 crc kubenswrapper[4869]: I0130 21:47:10.516523 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="20c24fb1-6b9c-4688-bb3e-6bc97fce9856" containerName="route-controller-manager" Jan 30 21:47:10 crc kubenswrapper[4869]: I0130 21:47:10.516642 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="20c24fb1-6b9c-4688-bb3e-6bc97fce9856" containerName="route-controller-manager" Jan 30 21:47:10 crc kubenswrapper[4869]: E0130 21:47:10.516747 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 30 21:47:10 crc kubenswrapper[4869]: E0130 21:47:10.516937 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lwcmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-rr69r_openshift-marketplace(b394f841-ea61-41c7-9b4b-7ad185073b70): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 21:47:10 crc kubenswrapper[4869]: E0130 21:47:10.518169 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-rr69r" podUID="b394f841-ea61-41c7-9b4b-7ad185073b70" Jan 30 21:47:10 crc kubenswrapper[4869]: I0130 
21:47:10.520114 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c7d7f8749-dqjfq"] Jan 30 21:47:10 crc kubenswrapper[4869]: I0130 21:47:10.520994 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c7d7f8749-dqjfq" Jan 30 21:47:10 crc kubenswrapper[4869]: I0130 21:47:10.559523 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d442182f-dccd-43c0-a24f-e13064ee94b5-config\") pod \"route-controller-manager-6c7d7f8749-dqjfq\" (UID: \"d442182f-dccd-43c0-a24f-e13064ee94b5\") " pod="openshift-route-controller-manager/route-controller-manager-6c7d7f8749-dqjfq" Jan 30 21:47:10 crc kubenswrapper[4869]: I0130 21:47:10.559598 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d442182f-dccd-43c0-a24f-e13064ee94b5-serving-cert\") pod \"route-controller-manager-6c7d7f8749-dqjfq\" (UID: \"d442182f-dccd-43c0-a24f-e13064ee94b5\") " pod="openshift-route-controller-manager/route-controller-manager-6c7d7f8749-dqjfq" Jan 30 21:47:10 crc kubenswrapper[4869]: I0130 21:47:10.559656 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d442182f-dccd-43c0-a24f-e13064ee94b5-client-ca\") pod \"route-controller-manager-6c7d7f8749-dqjfq\" (UID: \"d442182f-dccd-43c0-a24f-e13064ee94b5\") " pod="openshift-route-controller-manager/route-controller-manager-6c7d7f8749-dqjfq" Jan 30 21:47:10 crc kubenswrapper[4869]: I0130 21:47:10.559682 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flx6c\" (UniqueName: \"kubernetes.io/projected/d442182f-dccd-43c0-a24f-e13064ee94b5-kube-api-access-flx6c\") pod \"route-controller-manager-6c7d7f8749-dqjfq\" (UID: \"d442182f-dccd-43c0-a24f-e13064ee94b5\") " pod="openshift-route-controller-manager/route-controller-manager-6c7d7f8749-dqjfq" Jan 30 21:47:10 crc kubenswrapper[4869]: I0130 21:47:10.559760 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20c24fb1-6b9c-4688-bb3e-6bc97fce9856-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:10 crc kubenswrapper[4869]: I0130 21:47:10.559779 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20c24fb1-6b9c-4688-bb3e-6bc97fce9856-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:10 crc kubenswrapper[4869]: I0130 21:47:10.559790 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20c24fb1-6b9c-4688-bb3e-6bc97fce9856-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:10 crc kubenswrapper[4869]: I0130 21:47:10.559801 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4ftk\" (UniqueName: \"kubernetes.io/projected/20c24fb1-6b9c-4688-bb3e-6bc97fce9856-kube-api-access-m4ftk\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:10 crc kubenswrapper[4869]: I0130 21:47:10.661108 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d442182f-dccd-43c0-a24f-e13064ee94b5-serving-cert\") pod \"route-controller-manager-6c7d7f8749-dqjfq\" (UID: 
\"d442182f-dccd-43c0-a24f-e13064ee94b5\") " pod="openshift-route-controller-manager/route-controller-manager-6c7d7f8749-dqjfq" Jan 30 21:47:10 crc kubenswrapper[4869]: I0130 21:47:10.661192 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d442182f-dccd-43c0-a24f-e13064ee94b5-client-ca\") pod \"route-controller-manager-6c7d7f8749-dqjfq\" (UID: \"d442182f-dccd-43c0-a24f-e13064ee94b5\") " pod="openshift-route-controller-manager/route-controller-manager-6c7d7f8749-dqjfq" Jan 30 21:47:10 crc kubenswrapper[4869]: I0130 21:47:10.661219 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flx6c\" (UniqueName: \"kubernetes.io/projected/d442182f-dccd-43c0-a24f-e13064ee94b5-kube-api-access-flx6c\") pod \"route-controller-manager-6c7d7f8749-dqjfq\" (UID: \"d442182f-dccd-43c0-a24f-e13064ee94b5\") " pod="openshift-route-controller-manager/route-controller-manager-6c7d7f8749-dqjfq" Jan 30 21:47:10 crc kubenswrapper[4869]: I0130 21:47:10.661286 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d442182f-dccd-43c0-a24f-e13064ee94b5-config\") pod \"route-controller-manager-6c7d7f8749-dqjfq\" (UID: \"d442182f-dccd-43c0-a24f-e13064ee94b5\") " pod="openshift-route-controller-manager/route-controller-manager-6c7d7f8749-dqjfq" Jan 30 21:47:10 crc kubenswrapper[4869]: I0130 21:47:10.662378 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d442182f-dccd-43c0-a24f-e13064ee94b5-client-ca\") pod \"route-controller-manager-6c7d7f8749-dqjfq\" (UID: \"d442182f-dccd-43c0-a24f-e13064ee94b5\") " pod="openshift-route-controller-manager/route-controller-manager-6c7d7f8749-dqjfq" Jan 30 21:47:10 crc kubenswrapper[4869]: I0130 21:47:10.662554 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d442182f-dccd-43c0-a24f-e13064ee94b5-config\") pod \"route-controller-manager-6c7d7f8749-dqjfq\" (UID: \"d442182f-dccd-43c0-a24f-e13064ee94b5\") " pod="openshift-route-controller-manager/route-controller-manager-6c7d7f8749-dqjfq" Jan 30 21:47:10 crc kubenswrapper[4869]: I0130 21:47:10.664729 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d442182f-dccd-43c0-a24f-e13064ee94b5-serving-cert\") pod \"route-controller-manager-6c7d7f8749-dqjfq\" (UID: \"d442182f-dccd-43c0-a24f-e13064ee94b5\") " pod="openshift-route-controller-manager/route-controller-manager-6c7d7f8749-dqjfq" Jan 30 21:47:10 crc kubenswrapper[4869]: I0130 21:47:10.677211 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flx6c\" (UniqueName: \"kubernetes.io/projected/d442182f-dccd-43c0-a24f-e13064ee94b5-kube-api-access-flx6c\") pod \"route-controller-manager-6c7d7f8749-dqjfq\" (UID: \"d442182f-dccd-43c0-a24f-e13064ee94b5\") " pod="openshift-route-controller-manager/route-controller-manager-6c7d7f8749-dqjfq" Jan 30 21:47:10 crc kubenswrapper[4869]: I0130 21:47:10.855073 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c7d7f8749-dqjfq" Jan 30 21:47:11 crc kubenswrapper[4869]: I0130 21:47:11.372936 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4q8n" Jan 30 21:47:11 crc kubenswrapper[4869]: I0130 21:47:11.372995 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4q8n" event={"ID":"20c24fb1-6b9c-4688-bb3e-6bc97fce9856","Type":"ContainerDied","Data":"58d761f408300824efd753f1f0a2e3ffe70da4b4848ba930ae8515d6ea7a580b"} Jan 30 21:47:11 crc kubenswrapper[4869]: I0130 21:47:11.433559 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4q8n"] Jan 30 21:47:11 crc kubenswrapper[4869]: I0130 21:47:11.436542 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4q8n"] Jan 30 21:47:11 crc kubenswrapper[4869]: E0130 21:47:11.775483 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-rr69r" podUID="b394f841-ea61-41c7-9b4b-7ad185073b70" Jan 30 21:47:11 crc kubenswrapper[4869]: I0130 21:47:11.794520 4869 scope.go:117] "RemoveContainer" containerID="ab41174f167e2e0a914283e6dfe1df9a3a87cfbff56339f682b5022cfedfc582" Jan 30 21:47:11 crc kubenswrapper[4869]: E0130 21:47:11.867433 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 30 21:47:11 crc kubenswrapper[4869]: E0130 21:47:11.867600 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c2q9f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-6v7j4_openshift-marketplace(4ebfcb0e-58a5-4ab1-894f-1a6093921531): ErrImagePull: rpc error: code = Canceled desc = 
copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 21:47:11 crc kubenswrapper[4869]: E0130 21:47:11.868815 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-6v7j4" podUID="4ebfcb0e-58a5-4ab1-894f-1a6093921531" Jan 30 21:47:11 crc kubenswrapper[4869]: E0130 21:47:11.887653 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 30 21:47:11 crc kubenswrapper[4869]: E0130 21:47:11.887851 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lldpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-jkgvk_openshift-marketplace(2f93f920-68d0-41d1-8a20-ca174eda2fcd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 21:47:11 crc kubenswrapper[4869]: E0130 21:47:11.889700 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-jkgvk" podUID="2f93f920-68d0-41d1-8a20-ca174eda2fcd" Jan 30 21:47:11 crc kubenswrapper[4869]: I0130 21:47:11.906389 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20c24fb1-6b9c-4688-bb3e-6bc97fce9856" path="/var/lib/kubelet/pods/20c24fb1-6b9c-4688-bb3e-6bc97fce9856/volumes" Jan 30 21:47:11 crc kubenswrapper[4869]: E0130 21:47:11.994178 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying 
system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 30 21:47:11 crc kubenswrapper[4869]: E0130 21:47:11.994306 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jdk4s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-w5fzr_openshift-marketplace(85baedd6-e513-4741-98cf-ef39cfda8ecb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 21:47:11 crc kubenswrapper[4869]: E0130 21:47:11.995506 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-w5fzr" podUID="85baedd6-e513-4741-98cf-ef39cfda8ecb" Jan 30 21:47:12 crc kubenswrapper[4869]: I0130 21:47:12.309768 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 30 21:47:12 crc kubenswrapper[4869]: I0130 21:47:12.314086 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c5f5c5486-mqt4f"] Jan 30 21:47:12 crc kubenswrapper[4869]: I0130 21:47:12.383026 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tsp88" event={"ID":"0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3","Type":"ContainerStarted","Data":"5e248ca8eda44eeaec38c679eb2ad84bb6907c68976fe46ee3408b724d83f75c"} Jan 30 21:47:12 crc kubenswrapper[4869]: I0130 21:47:12.386761 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" event={"ID":"b6fc0664-5e80-440d-a6e8-4189cdf5c500","Type":"ContainerStarted","Data":"973a5ef833744fae8722d2d7d547e46f64e5f09ddd2aedbd9671f0d4496e56c1"} Jan 30 21:47:12 crc kubenswrapper[4869]: I0130 21:47:12.404018 4869 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 30 21:47:12 crc kubenswrapper[4869]: I0130 21:47:12.406924 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c7d7f8749-dqjfq"] Jan 30 21:47:12 crc kubenswrapper[4869]: E0130 21:47:12.654285 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6v7j4" podUID="4ebfcb0e-58a5-4ab1-894f-1a6093921531" Jan 30 21:47:12 crc kubenswrapper[4869]: E0130 21:47:12.654605 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jkgvk" podUID="2f93f920-68d0-41d1-8a20-ca174eda2fcd" Jan 30 21:47:12 crc kubenswrapper[4869]: E0130 21:47:12.691085 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-w5fzr" podUID="85baedd6-e513-4741-98cf-ef39cfda8ecb" Jan 30 21:47:13 crc kubenswrapper[4869]: I0130 21:47:13.394548 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3b828b21-57e4-4990-b9dd-184fbfc06736","Type":"ContainerStarted","Data":"a20bf57a8721da905e9a65fb7224e0277226e2dc5360967d1479d7d1ea0c785e"} Jan 30 21:47:13 crc kubenswrapper[4869]: I0130 21:47:13.394988 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3b828b21-57e4-4990-b9dd-184fbfc06736","Type":"ContainerStarted","Data":"ef933f740b07576f4a1fcfffe0c159a8a82988b9e0b3bbc0a193e393707839d6"} Jan 30 21:47:13 crc kubenswrapper[4869]: I0130 21:47:13.397049 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"4efc27c1-fb04-4be9-88b2-25b7657400b7","Type":"ContainerStarted","Data":"d3f014043a3d991391363cd5b6ac8a8ae6e4017676fdb7f06c79c55e96db5b22"} Jan 30 21:47:13 crc kubenswrapper[4869]: I0130 21:47:13.397084 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"4efc27c1-fb04-4be9-88b2-25b7657400b7","Type":"ContainerStarted","Data":"4eecf7eb539d0e415c9e6ae80b0c01ea992e59fcb6673cd772da1276aced799a"} Jan 30 21:47:13 crc kubenswrapper[4869]: I0130 21:47:13.399245 4869 generic.go:334] "Generic (PLEG): container finished" podID="72e1dcaa-1805-4157-8fd7-0e00177aaf4c" containerID="d36db8a5338d5ea2231a6e02d029244e917d10976073eda5f183627eea086bbe" exitCode=0 Jan 30 21:47:13 crc kubenswrapper[4869]: I0130 21:47:13.399365 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-86lcz" event={"ID":"72e1dcaa-1805-4157-8fd7-0e00177aaf4c","Type":"ContainerDied","Data":"d36db8a5338d5ea2231a6e02d029244e917d10976073eda5f183627eea086bbe"} Jan 30 21:47:13 crc kubenswrapper[4869]: I0130 21:47:13.403467 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c5f5c5486-mqt4f" 
event={"ID":"4df62410-466a-4c9a-8568-dd563d6b77fd","Type":"ContainerStarted","Data":"995bbf25b2562bea04cc1fec17391bfe92ba07196f8830c2ee162a63dc16c4b2"} Jan 30 21:47:13 crc kubenswrapper[4869]: I0130 21:47:13.403509 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c5f5c5486-mqt4f" event={"ID":"4df62410-466a-4c9a-8568-dd563d6b77fd","Type":"ContainerStarted","Data":"a806cb4f2608ea24e1ea9472afab3e0d30aaddcbcc1e7c334c76bfe40a6d44c4"} Jan 30 21:47:13 crc kubenswrapper[4869]: I0130 21:47:13.403604 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7c5f5c5486-mqt4f" podUID="4df62410-466a-4c9a-8568-dd563d6b77fd" containerName="controller-manager" containerID="cri-o://995bbf25b2562bea04cc1fec17391bfe92ba07196f8830c2ee162a63dc16c4b2" gracePeriod=30 Jan 30 21:47:13 crc kubenswrapper[4869]: I0130 21:47:13.404027 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7c5f5c5486-mqt4f" Jan 30 21:47:13 crc kubenswrapper[4869]: I0130 21:47:13.406288 4869 generic.go:334] "Generic (PLEG): container finished" podID="98ffc112-5fb6-4001-b071-2df7e3d90fd2" containerID="4b6a2ff9175c3c22c3084804fbdb80b96c1e0f08a8317534e3ff0471c2070a84" exitCode=0 Jan 30 21:47:13 crc kubenswrapper[4869]: I0130 21:47:13.406366 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nlmmn" event={"ID":"98ffc112-5fb6-4001-b071-2df7e3d90fd2","Type":"ContainerDied","Data":"4b6a2ff9175c3c22c3084804fbdb80b96c1e0f08a8317534e3ff0471c2070a84"} Jan 30 21:47:13 crc kubenswrapper[4869]: I0130 21:47:13.413202 4869 generic.go:334] "Generic (PLEG): container finished" podID="a79f57ed-ffe5-4f65-acd5-0bcd42e47a02" containerID="7b7cc6d9ef18417e30d7694b50fd9d1c24239efd81fa3699f2bc8008c3495b4b" exitCode=0 Jan 30 21:47:13 crc kubenswrapper[4869]: I0130 21:47:13.413279 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2g6w" event={"ID":"a79f57ed-ffe5-4f65-acd5-0bcd42e47a02","Type":"ContainerDied","Data":"7b7cc6d9ef18417e30d7694b50fd9d1c24239efd81fa3699f2bc8008c3495b4b"} Jan 30 21:47:13 crc kubenswrapper[4869]: I0130 21:47:13.414277 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=34.414258614 podStartE2EDuration="34.414258614s" podCreationTimestamp="2026-01-30 21:46:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:47:13.412492657 +0000 UTC m=+234.298250682" watchObservedRunningTime="2026-01-30 21:47:13.414258614 +0000 UTC m=+234.300016649" Jan 30 21:47:13 crc kubenswrapper[4869]: I0130 21:47:13.420697 4869 generic.go:334] "Generic (PLEG): container finished" podID="0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3" containerID="5e248ca8eda44eeaec38c679eb2ad84bb6907c68976fe46ee3408b724d83f75c" exitCode=0 Jan 30 21:47:13 crc kubenswrapper[4869]: I0130 21:47:13.420782 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tsp88" event={"ID":"0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3","Type":"ContainerDied","Data":"5e248ca8eda44eeaec38c679eb2ad84bb6907c68976fe46ee3408b724d83f75c"} Jan 30 21:47:13 crc kubenswrapper[4869]: I0130 21:47:13.423290 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-6c7d7f8749-dqjfq" event={"ID":"d442182f-dccd-43c0-a24f-e13064ee94b5","Type":"ContainerStarted","Data":"bed447bd830f6a7ae52d45b3540241619893ebbe0b37b015c18dd5e750f1230c"} Jan 30 21:47:13 crc kubenswrapper[4869]: I0130 21:47:13.423330 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6c7d7f8749-dqjfq" Jan 30 21:47:13 crc kubenswrapper[4869]: I0130 21:47:13.423345 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c7d7f8749-dqjfq" event={"ID":"d442182f-dccd-43c0-a24f-e13064ee94b5","Type":"ContainerStarted","Data":"866b8b24af4f81067dad9c5ebf034f902139b1daf313d3b55068faab4afdd1a1"} Jan 30 21:47:13 crc kubenswrapper[4869]: I0130 21:47:13.426090 4869 patch_prober.go:28] interesting pod/controller-manager-7c5f5c5486-mqt4f container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": read tcp 10.217.0.2:37030->10.217.0.54:8443: read: connection reset by peer" start-of-body= Jan 30 21:47:13 crc kubenswrapper[4869]: I0130 21:47:13.426134 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7c5f5c5486-mqt4f" podUID="4df62410-466a-4c9a-8568-dd563d6b77fd" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": read tcp 10.217.0.2:37030->10.217.0.54:8443: read: connection reset by peer" Jan 30 21:47:13 crc kubenswrapper[4869]: I0130 21:47:13.449524 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6c7d7f8749-dqjfq" Jan 30 21:47:13 crc kubenswrapper[4869]: I0130 21:47:13.454998 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7c5f5c5486-mqt4f" podStartSLOduration=56.454980531 podStartE2EDuration="56.454980531s" podCreationTimestamp="2026-01-30 21:46:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:47:13.453940438 +0000 UTC m=+234.339698753" watchObservedRunningTime="2026-01-30 21:47:13.454980531 +0000 UTC m=+234.340738556" Jan 30 21:47:13 crc kubenswrapper[4869]: I0130 21:47:13.491845 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=29.491829816 podStartE2EDuration="29.491829816s" podCreationTimestamp="2026-01-30 21:46:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:47:13.490667409 +0000 UTC m=+234.376425434" watchObservedRunningTime="2026-01-30 21:47:13.491829816 +0000 UTC m=+234.377587841" Jan 30 21:47:13 crc kubenswrapper[4869]: I0130 21:47:13.514457 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6c7d7f8749-dqjfq" podStartSLOduration=36.514436717 podStartE2EDuration="36.514436717s" podCreationTimestamp="2026-01-30 21:46:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:47:13.511546935 +0000 UTC m=+234.397304960" watchObservedRunningTime="2026-01-30 
21:47:13.514436717 +0000 UTC m=+234.400194742" Jan 30 21:47:13 crc kubenswrapper[4869]: I0130 21:47:13.847143 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c5f5c5486-mqt4f" Jan 30 21:47:13 crc kubenswrapper[4869]: I0130 21:47:13.919534 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4df62410-466a-4c9a-8568-dd563d6b77fd-serving-cert\") pod \"4df62410-466a-4c9a-8568-dd563d6b77fd\" (UID: \"4df62410-466a-4c9a-8568-dd563d6b77fd\") " Jan 30 21:47:13 crc kubenswrapper[4869]: I0130 21:47:13.919582 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4df62410-466a-4c9a-8568-dd563d6b77fd-proxy-ca-bundles\") pod \"4df62410-466a-4c9a-8568-dd563d6b77fd\" (UID: \"4df62410-466a-4c9a-8568-dd563d6b77fd\") " Jan 30 21:47:13 crc kubenswrapper[4869]: I0130 21:47:13.919608 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4df62410-466a-4c9a-8568-dd563d6b77fd-config\") pod \"4df62410-466a-4c9a-8568-dd563d6b77fd\" (UID: \"4df62410-466a-4c9a-8568-dd563d6b77fd\") " Jan 30 21:47:13 crc kubenswrapper[4869]: I0130 21:47:13.919749 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4df62410-466a-4c9a-8568-dd563d6b77fd-client-ca\") pod \"4df62410-466a-4c9a-8568-dd563d6b77fd\" (UID: \"4df62410-466a-4c9a-8568-dd563d6b77fd\") " Jan 30 21:47:13 crc kubenswrapper[4869]: I0130 21:47:13.919808 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7f887\" (UniqueName: \"kubernetes.io/projected/4df62410-466a-4c9a-8568-dd563d6b77fd-kube-api-access-7f887\") pod \"4df62410-466a-4c9a-8568-dd563d6b77fd\" (UID: \"4df62410-466a-4c9a-8568-dd563d6b77fd\") " Jan 30 21:47:13 crc kubenswrapper[4869]: I0130 21:47:13.920773 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4df62410-466a-4c9a-8568-dd563d6b77fd-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4df62410-466a-4c9a-8568-dd563d6b77fd" (UID: "4df62410-466a-4c9a-8568-dd563d6b77fd"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:47:13 crc kubenswrapper[4869]: I0130 21:47:13.920858 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4df62410-466a-4c9a-8568-dd563d6b77fd-client-ca" (OuterVolumeSpecName: "client-ca") pod "4df62410-466a-4c9a-8568-dd563d6b77fd" (UID: "4df62410-466a-4c9a-8568-dd563d6b77fd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:47:13 crc kubenswrapper[4869]: I0130 21:47:13.921028 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4df62410-466a-4c9a-8568-dd563d6b77fd-config" (OuterVolumeSpecName: "config") pod "4df62410-466a-4c9a-8568-dd563d6b77fd" (UID: "4df62410-466a-4c9a-8568-dd563d6b77fd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:47:13 crc kubenswrapper[4869]: I0130 21:47:13.925127 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4df62410-466a-4c9a-8568-dd563d6b77fd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4df62410-466a-4c9a-8568-dd563d6b77fd" (UID: "4df62410-466a-4c9a-8568-dd563d6b77fd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:47:13 crc kubenswrapper[4869]: I0130 21:47:13.928466 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4df62410-466a-4c9a-8568-dd563d6b77fd-kube-api-access-7f887" (OuterVolumeSpecName: "kube-api-access-7f887") pod "4df62410-466a-4c9a-8568-dd563d6b77fd" (UID: "4df62410-466a-4c9a-8568-dd563d6b77fd"). InnerVolumeSpecName "kube-api-access-7f887". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:47:13 crc kubenswrapper[4869]: I0130 21:47:13.942593 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-76496656b7-57rfw"] Jan 30 21:47:13 crc kubenswrapper[4869]: E0130 21:47:13.942997 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4df62410-466a-4c9a-8568-dd563d6b77fd" containerName="controller-manager" Jan 30 21:47:13 crc kubenswrapper[4869]: I0130 21:47:13.943021 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4df62410-466a-4c9a-8568-dd563d6b77fd" containerName="controller-manager" Jan 30 21:47:13 crc kubenswrapper[4869]: I0130 21:47:13.943186 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="4df62410-466a-4c9a-8568-dd563d6b77fd" containerName="controller-manager" Jan 30 21:47:13 crc kubenswrapper[4869]: I0130 21:47:13.943719 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-76496656b7-57rfw"] Jan 30 21:47:13 crc kubenswrapper[4869]: I0130 21:47:13.943836 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-76496656b7-57rfw" Jan 30 21:47:14 crc kubenswrapper[4869]: I0130 21:47:14.020561 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/edff57f9-27da-47e1-9d1b-174d22fcced3-proxy-ca-bundles\") pod \"controller-manager-76496656b7-57rfw\" (UID: \"edff57f9-27da-47e1-9d1b-174d22fcced3\") " pod="openshift-controller-manager/controller-manager-76496656b7-57rfw" Jan 30 21:47:14 crc kubenswrapper[4869]: I0130 21:47:14.021754 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/edff57f9-27da-47e1-9d1b-174d22fcced3-client-ca\") pod \"controller-manager-76496656b7-57rfw\" (UID: \"edff57f9-27da-47e1-9d1b-174d22fcced3\") " pod="openshift-controller-manager/controller-manager-76496656b7-57rfw" Jan 30 21:47:14 crc kubenswrapper[4869]: I0130 21:47:14.021877 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77g7s\" (UniqueName: \"kubernetes.io/projected/edff57f9-27da-47e1-9d1b-174d22fcced3-kube-api-access-77g7s\") pod \"controller-manager-76496656b7-57rfw\" (UID: \"edff57f9-27da-47e1-9d1b-174d22fcced3\") " pod="openshift-controller-manager/controller-manager-76496656b7-57rfw" Jan 30 21:47:14 crc kubenswrapper[4869]: I0130 21:47:14.022152 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edff57f9-27da-47e1-9d1b-174d22fcced3-config\") pod \"controller-manager-76496656b7-57rfw\" (UID: \"edff57f9-27da-47e1-9d1b-174d22fcced3\") " pod="openshift-controller-manager/controller-manager-76496656b7-57rfw" Jan 30 21:47:14 crc kubenswrapper[4869]: I0130 21:47:14.022456 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edff57f9-27da-47e1-9d1b-174d22fcced3-serving-cert\") pod \"controller-manager-76496656b7-57rfw\" (UID: \"edff57f9-27da-47e1-9d1b-174d22fcced3\") " pod="openshift-controller-manager/controller-manager-76496656b7-57rfw" Jan 30 21:47:14 crc kubenswrapper[4869]: I0130 21:47:14.022558 4869 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4df62410-466a-4c9a-8568-dd563d6b77fd-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:14 crc kubenswrapper[4869]: I0130 21:47:14.022578 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4df62410-466a-4c9a-8568-dd563d6b77fd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:14 crc kubenswrapper[4869]: I0130 21:47:14.022589 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4df62410-466a-4c9a-8568-dd563d6b77fd-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:14 crc kubenswrapper[4869]: I0130 21:47:14.022618 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4df62410-466a-4c9a-8568-dd563d6b77fd-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:14 crc kubenswrapper[4869]: I0130 21:47:14.022630 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7f887\" (UniqueName: 
\"kubernetes.io/projected/4df62410-466a-4c9a-8568-dd563d6b77fd-kube-api-access-7f887\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:14 crc kubenswrapper[4869]: I0130 21:47:14.124313 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/edff57f9-27da-47e1-9d1b-174d22fcced3-proxy-ca-bundles\") pod \"controller-manager-76496656b7-57rfw\" (UID: \"edff57f9-27da-47e1-9d1b-174d22fcced3\") " pod="openshift-controller-manager/controller-manager-76496656b7-57rfw" Jan 30 21:47:14 crc kubenswrapper[4869]: I0130 21:47:14.124370 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/edff57f9-27da-47e1-9d1b-174d22fcced3-client-ca\") pod \"controller-manager-76496656b7-57rfw\" (UID: \"edff57f9-27da-47e1-9d1b-174d22fcced3\") " pod="openshift-controller-manager/controller-manager-76496656b7-57rfw" Jan 30 21:47:14 crc kubenswrapper[4869]: I0130 21:47:14.124389 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77g7s\" (UniqueName: \"kubernetes.io/projected/edff57f9-27da-47e1-9d1b-174d22fcced3-kube-api-access-77g7s\") pod \"controller-manager-76496656b7-57rfw\" (UID: \"edff57f9-27da-47e1-9d1b-174d22fcced3\") " pod="openshift-controller-manager/controller-manager-76496656b7-57rfw" Jan 30 21:47:14 crc kubenswrapper[4869]: I0130 21:47:14.124418 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edff57f9-27da-47e1-9d1b-174d22fcced3-config\") pod \"controller-manager-76496656b7-57rfw\" (UID: \"edff57f9-27da-47e1-9d1b-174d22fcced3\") " pod="openshift-controller-manager/controller-manager-76496656b7-57rfw" Jan 30 21:47:14 crc kubenswrapper[4869]: I0130 21:47:14.124472 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edff57f9-27da-47e1-9d1b-174d22fcced3-serving-cert\") pod \"controller-manager-76496656b7-57rfw\" (UID: \"edff57f9-27da-47e1-9d1b-174d22fcced3\") " pod="openshift-controller-manager/controller-manager-76496656b7-57rfw" Jan 30 21:47:14 crc kubenswrapper[4869]: I0130 21:47:14.125970 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/edff57f9-27da-47e1-9d1b-174d22fcced3-proxy-ca-bundles\") pod \"controller-manager-76496656b7-57rfw\" (UID: \"edff57f9-27da-47e1-9d1b-174d22fcced3\") " pod="openshift-controller-manager/controller-manager-76496656b7-57rfw" Jan 30 21:47:14 crc kubenswrapper[4869]: I0130 21:47:14.125983 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/edff57f9-27da-47e1-9d1b-174d22fcced3-client-ca\") pod \"controller-manager-76496656b7-57rfw\" (UID: \"edff57f9-27da-47e1-9d1b-174d22fcced3\") " pod="openshift-controller-manager/controller-manager-76496656b7-57rfw" Jan 30 21:47:14 crc kubenswrapper[4869]: I0130 21:47:14.126335 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edff57f9-27da-47e1-9d1b-174d22fcced3-config\") pod \"controller-manager-76496656b7-57rfw\" (UID: \"edff57f9-27da-47e1-9d1b-174d22fcced3\") " pod="openshift-controller-manager/controller-manager-76496656b7-57rfw" Jan 30 21:47:14 crc kubenswrapper[4869]: I0130 21:47:14.129339 4869 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edff57f9-27da-47e1-9d1b-174d22fcced3-serving-cert\") pod \"controller-manager-76496656b7-57rfw\" (UID: \"edff57f9-27da-47e1-9d1b-174d22fcced3\") " pod="openshift-controller-manager/controller-manager-76496656b7-57rfw" Jan 30 21:47:14 crc kubenswrapper[4869]: I0130 21:47:14.140365 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77g7s\" (UniqueName: \"kubernetes.io/projected/edff57f9-27da-47e1-9d1b-174d22fcced3-kube-api-access-77g7s\") pod \"controller-manager-76496656b7-57rfw\" (UID: \"edff57f9-27da-47e1-9d1b-174d22fcced3\") " pod="openshift-controller-manager/controller-manager-76496656b7-57rfw" Jan 30 21:47:14 crc kubenswrapper[4869]: I0130 21:47:14.257850 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-76496656b7-57rfw" Jan 30 21:47:14 crc kubenswrapper[4869]: I0130 21:47:14.435064 4869 generic.go:334] "Generic (PLEG): container finished" podID="4df62410-466a-4c9a-8568-dd563d6b77fd" containerID="995bbf25b2562bea04cc1fec17391bfe92ba07196f8830c2ee162a63dc16c4b2" exitCode=0 Jan 30 21:47:14 crc kubenswrapper[4869]: I0130 21:47:14.435446 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c5f5c5486-mqt4f" event={"ID":"4df62410-466a-4c9a-8568-dd563d6b77fd","Type":"ContainerDied","Data":"995bbf25b2562bea04cc1fec17391bfe92ba07196f8830c2ee162a63dc16c4b2"} Jan 30 21:47:14 crc kubenswrapper[4869]: I0130 21:47:14.435475 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c5f5c5486-mqt4f" event={"ID":"4df62410-466a-4c9a-8568-dd563d6b77fd","Type":"ContainerDied","Data":"a806cb4f2608ea24e1ea9472afab3e0d30aaddcbcc1e7c334c76bfe40a6d44c4"} Jan 30 21:47:14 crc kubenswrapper[4869]: I0130 21:47:14.435492 4869 scope.go:117] "RemoveContainer" containerID="995bbf25b2562bea04cc1fec17391bfe92ba07196f8830c2ee162a63dc16c4b2" Jan 30 21:47:14 crc kubenswrapper[4869]: I0130 21:47:14.435596 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7c5f5c5486-mqt4f" Jan 30 21:47:14 crc kubenswrapper[4869]: I0130 21:47:14.444577 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2g6w" event={"ID":"a79f57ed-ffe5-4f65-acd5-0bcd42e47a02","Type":"ContainerStarted","Data":"bdfe12b8ada59ecac25ece6b4064951aa03f2c9eae06860f9d608933bbe12a22"} Jan 30 21:47:14 crc kubenswrapper[4869]: I0130 21:47:14.454563 4869 generic.go:334] "Generic (PLEG): container finished" podID="3b828b21-57e4-4990-b9dd-184fbfc06736" containerID="a20bf57a8721da905e9a65fb7224e0277226e2dc5360967d1479d7d1ea0c785e" exitCode=0 Jan 30 21:47:14 crc kubenswrapper[4869]: I0130 21:47:14.455221 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3b828b21-57e4-4990-b9dd-184fbfc06736","Type":"ContainerDied","Data":"a20bf57a8721da905e9a65fb7224e0277226e2dc5360967d1479d7d1ea0c785e"} Jan 30 21:47:14 crc kubenswrapper[4869]: I0130 21:47:14.458532 4869 scope.go:117] "RemoveContainer" containerID="995bbf25b2562bea04cc1fec17391bfe92ba07196f8830c2ee162a63dc16c4b2" Jan 30 21:47:14 crc kubenswrapper[4869]: E0130 21:47:14.468776 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"995bbf25b2562bea04cc1fec17391bfe92ba07196f8830c2ee162a63dc16c4b2\": container with ID starting with 995bbf25b2562bea04cc1fec17391bfe92ba07196f8830c2ee162a63dc16c4b2 not found: ID does not exist" containerID="995bbf25b2562bea04cc1fec17391bfe92ba07196f8830c2ee162a63dc16c4b2" Jan 30 21:47:14 crc kubenswrapper[4869]: I0130 21:47:14.468827 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"995bbf25b2562bea04cc1fec17391bfe92ba07196f8830c2ee162a63dc16c4b2"} err="failed to get container status \"995bbf25b2562bea04cc1fec17391bfe92ba07196f8830c2ee162a63dc16c4b2\": rpc error: code = NotFound desc = could not find container \"995bbf25b2562bea04cc1fec17391bfe92ba07196f8830c2ee162a63dc16c4b2\": container with ID starting with 995bbf25b2562bea04cc1fec17391bfe92ba07196f8830c2ee162a63dc16c4b2 not found: ID does not exist" Jan 30 21:47:14 crc kubenswrapper[4869]: I0130 21:47:14.486367 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-b2g6w" podStartSLOduration=3.985676302 podStartE2EDuration="1m16.486345679s" podCreationTimestamp="2026-01-30 21:45:58 +0000 UTC" firstStartedPulling="2026-01-30 21:46:01.547017778 +0000 UTC m=+162.432775813" lastFinishedPulling="2026-01-30 21:47:14.047687165 +0000 UTC m=+234.933445190" observedRunningTime="2026-01-30 21:47:14.466740644 +0000 UTC m=+235.352498669" watchObservedRunningTime="2026-01-30 21:47:14.486345679 +0000 UTC m=+235.372103704" Jan 30 21:47:14 crc kubenswrapper[4869]: I0130 21:47:14.499500 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c5f5c5486-mqt4f"] Jan 30 21:47:14 crc kubenswrapper[4869]: I0130 21:47:14.502257 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7c5f5c5486-mqt4f"] Jan 30 21:47:14 crc kubenswrapper[4869]: I0130 21:47:14.642746 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-76496656b7-57rfw"] Jan 30 21:47:14 crc kubenswrapper[4869]: W0130 21:47:14.648942 4869 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podedff57f9_27da_47e1_9d1b_174d22fcced3.slice/crio-8c7b5468025e5f5554b34f649e4fe9bfd17fa0c1950d5cd4d824456e19f48caf WatchSource:0}: Error finding container 8c7b5468025e5f5554b34f649e4fe9bfd17fa0c1950d5cd4d824456e19f48caf: Status 404 returned error can't find the container with id 8c7b5468025e5f5554b34f649e4fe9bfd17fa0c1950d5cd4d824456e19f48caf Jan 30 21:47:15 crc kubenswrapper[4869]: I0130 21:47:15.462877 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-86lcz" event={"ID":"72e1dcaa-1805-4157-8fd7-0e00177aaf4c","Type":"ContainerStarted","Data":"c417b8f5398446fceb87e32a8db235e01d61a57bec029c2dbb7f91950e7164d7"} Jan 30 21:47:15 crc kubenswrapper[4869]: I0130 21:47:15.465723 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nlmmn" event={"ID":"98ffc112-5fb6-4001-b071-2df7e3d90fd2","Type":"ContainerStarted","Data":"fd6d274c4833a79ec1faec72865e062649ba0ff9fa445eedee124f379e2de9f3"} Jan 30 21:47:15 crc kubenswrapper[4869]: I0130 21:47:15.468391 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tsp88" event={"ID":"0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3","Type":"ContainerStarted","Data":"b40787daa337627e594426c009ef5978b05485802914ed1a3478a4acddd9e720"} Jan 30 21:47:15 crc kubenswrapper[4869]: I0130 21:47:15.469795 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76496656b7-57rfw" event={"ID":"edff57f9-27da-47e1-9d1b-174d22fcced3","Type":"ContainerStarted","Data":"041a4356f58c689465a3dd80163d1a47ce8c5e45a1bf4e221ee395ca77abb329"} Jan 30 21:47:15 crc kubenswrapper[4869]: I0130 21:47:15.469832 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76496656b7-57rfw" event={"ID":"edff57f9-27da-47e1-9d1b-174d22fcced3","Type":"ContainerStarted","Data":"8c7b5468025e5f5554b34f649e4fe9bfd17fa0c1950d5cd4d824456e19f48caf"} Jan 30 21:47:15 crc kubenswrapper[4869]: I0130 21:47:15.470035 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-76496656b7-57rfw" Jan 30 21:47:15 crc kubenswrapper[4869]: I0130 21:47:15.476081 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-76496656b7-57rfw" Jan 30 21:47:15 crc kubenswrapper[4869]: I0130 21:47:15.486999 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-86lcz" podStartSLOduration=2.267951913 podStartE2EDuration="1m14.486981785s" podCreationTimestamp="2026-01-30 21:46:01 +0000 UTC" firstStartedPulling="2026-01-30 21:46:02.680203578 +0000 UTC m=+163.565961603" lastFinishedPulling="2026-01-30 21:47:14.89923345 +0000 UTC m=+235.784991475" observedRunningTime="2026-01-30 21:47:15.48400213 +0000 UTC m=+236.369760175" watchObservedRunningTime="2026-01-30 21:47:15.486981785 +0000 UTC m=+236.372739810" Jan 30 21:47:15 crc kubenswrapper[4869]: I0130 21:47:15.505639 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-76496656b7-57rfw" podStartSLOduration=38.50561755 podStartE2EDuration="38.50561755s" podCreationTimestamp="2026-01-30 21:46:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-30 21:47:15.503629786 +0000 UTC m=+236.389387831" watchObservedRunningTime="2026-01-30 21:47:15.50561755 +0000 UTC m=+236.391375565" Jan 30 21:47:15 crc kubenswrapper[4869]: I0130 21:47:15.534208 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tsp88" podStartSLOduration=2.426240942 podStartE2EDuration="1m13.53418669s" podCreationTimestamp="2026-01-30 21:46:02 +0000 UTC" firstStartedPulling="2026-01-30 21:46:03.703436722 +0000 UTC m=+164.589194747" lastFinishedPulling="2026-01-30 21:47:14.81138248 +0000 UTC m=+235.697140495" observedRunningTime="2026-01-30 21:47:15.527227228 +0000 UTC m=+236.412985273" watchObservedRunningTime="2026-01-30 21:47:15.53418669 +0000 UTC m=+236.419944715" Jan 30 21:47:15 crc kubenswrapper[4869]: I0130 21:47:15.730465 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 21:47:15 crc kubenswrapper[4869]: I0130 21:47:15.742229 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nlmmn" podStartSLOduration=3.401579066 podStartE2EDuration="1m16.742210771s" podCreationTimestamp="2026-01-30 21:45:59 +0000 UTC" firstStartedPulling="2026-01-30 21:46:01.608451344 +0000 UTC m=+162.494209369" lastFinishedPulling="2026-01-30 21:47:14.949083049 +0000 UTC m=+235.834841074" observedRunningTime="2026-01-30 21:47:15.554410154 +0000 UTC m=+236.440168189" watchObservedRunningTime="2026-01-30 21:47:15.742210771 +0000 UTC m=+236.627968796" Jan 30 21:47:15 crc kubenswrapper[4869]: I0130 21:47:15.749592 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3b828b21-57e4-4990-b9dd-184fbfc06736-kubelet-dir\") pod \"3b828b21-57e4-4990-b9dd-184fbfc06736\" (UID: \"3b828b21-57e4-4990-b9dd-184fbfc06736\") " Jan 30 21:47:15 crc kubenswrapper[4869]: I0130 21:47:15.749673 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3b828b21-57e4-4990-b9dd-184fbfc06736-kube-api-access\") pod \"3b828b21-57e4-4990-b9dd-184fbfc06736\" (UID: \"3b828b21-57e4-4990-b9dd-184fbfc06736\") " Jan 30 21:47:15 crc kubenswrapper[4869]: I0130 21:47:15.749706 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b828b21-57e4-4990-b9dd-184fbfc06736-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3b828b21-57e4-4990-b9dd-184fbfc06736" (UID: "3b828b21-57e4-4990-b9dd-184fbfc06736"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:47:15 crc kubenswrapper[4869]: I0130 21:47:15.750069 4869 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3b828b21-57e4-4990-b9dd-184fbfc06736-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:15 crc kubenswrapper[4869]: I0130 21:47:15.756121 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b828b21-57e4-4990-b9dd-184fbfc06736-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3b828b21-57e4-4990-b9dd-184fbfc06736" (UID: "3b828b21-57e4-4990-b9dd-184fbfc06736"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:47:15 crc kubenswrapper[4869]: I0130 21:47:15.851588 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3b828b21-57e4-4990-b9dd-184fbfc06736-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:15 crc kubenswrapper[4869]: I0130 21:47:15.890873 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4df62410-466a-4c9a-8568-dd563d6b77fd" path="/var/lib/kubelet/pods/4df62410-466a-4c9a-8568-dd563d6b77fd/volumes" Jan 30 21:47:16 crc kubenswrapper[4869]: I0130 21:47:16.476773 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3b828b21-57e4-4990-b9dd-184fbfc06736","Type":"ContainerDied","Data":"ef933f740b07576f4a1fcfffe0c159a8a82988b9e0b3bbc0a193e393707839d6"} Jan 30 21:47:16 crc kubenswrapper[4869]: I0130 21:47:16.477152 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef933f740b07576f4a1fcfffe0c159a8a82988b9e0b3bbc0a193e393707839d6" Jan 30 21:47:16 crc kubenswrapper[4869]: I0130 21:47:16.477122 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 21:47:19 crc kubenswrapper[4869]: I0130 21:47:19.158321 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-b2g6w" Jan 30 21:47:19 crc kubenswrapper[4869]: I0130 21:47:19.158931 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-b2g6w" Jan 30 21:47:19 crc kubenswrapper[4869]: I0130 21:47:19.561025 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nlmmn" Jan 30 21:47:19 crc kubenswrapper[4869]: I0130 21:47:19.561098 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nlmmn" Jan 30 21:47:19 crc kubenswrapper[4869]: I0130 21:47:19.686170 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-b2g6w" Jan 30 21:47:19 crc kubenswrapper[4869]: I0130 21:47:19.686749 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nlmmn" Jan 30 21:47:19 crc kubenswrapper[4869]: I0130 21:47:19.729557 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-b2g6w" Jan 30 21:47:20 crc kubenswrapper[4869]: I0130 21:47:20.542740 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nlmmn" Jan 30 21:47:21 crc kubenswrapper[4869]: I0130 21:47:21.534144 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-86lcz" Jan 30 21:47:21 crc kubenswrapper[4869]: I0130 21:47:21.534451 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-86lcz" Jan 30 21:47:21 crc kubenswrapper[4869]: I0130 21:47:21.609613 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-86lcz" Jan 30 21:47:21 crc kubenswrapper[4869]: I0130 21:47:21.622630 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-nlmmn"] Jan 30 21:47:22 crc kubenswrapper[4869]: I0130 21:47:22.504310 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nlmmn" podUID="98ffc112-5fb6-4001-b071-2df7e3d90fd2" containerName="registry-server" containerID="cri-o://fd6d274c4833a79ec1faec72865e062649ba0ff9fa445eedee124f379e2de9f3" gracePeriod=2 Jan 30 21:47:22 crc kubenswrapper[4869]: I0130 21:47:22.527212 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tsp88" Jan 30 21:47:22 crc kubenswrapper[4869]: I0130 21:47:22.527272 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tsp88" Jan 30 21:47:22 crc kubenswrapper[4869]: I0130 21:47:22.542479 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-86lcz" Jan 30 21:47:22 crc kubenswrapper[4869]: I0130 21:47:22.573305 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tsp88" Jan 30 21:47:23 crc kubenswrapper[4869]: I0130 21:47:23.511683 4869 generic.go:334] "Generic (PLEG): container finished" podID="98ffc112-5fb6-4001-b071-2df7e3d90fd2" containerID="fd6d274c4833a79ec1faec72865e062649ba0ff9fa445eedee124f379e2de9f3" exitCode=0 Jan 30 21:47:23 crc kubenswrapper[4869]: I0130 21:47:23.511761 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nlmmn" event={"ID":"98ffc112-5fb6-4001-b071-2df7e3d90fd2","Type":"ContainerDied","Data":"fd6d274c4833a79ec1faec72865e062649ba0ff9fa445eedee124f379e2de9f3"} Jan 30 21:47:23 crc kubenswrapper[4869]: I0130 21:47:23.552782 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tsp88" Jan 30 21:47:23 crc kubenswrapper[4869]: I0130 21:47:23.827207 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nlmmn" Jan 30 21:47:23 crc kubenswrapper[4869]: I0130 21:47:23.856925 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98ffc112-5fb6-4001-b071-2df7e3d90fd2-utilities\") pod \"98ffc112-5fb6-4001-b071-2df7e3d90fd2\" (UID: \"98ffc112-5fb6-4001-b071-2df7e3d90fd2\") " Jan 30 21:47:23 crc kubenswrapper[4869]: I0130 21:47:23.856990 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfs9c\" (UniqueName: \"kubernetes.io/projected/98ffc112-5fb6-4001-b071-2df7e3d90fd2-kube-api-access-nfs9c\") pod \"98ffc112-5fb6-4001-b071-2df7e3d90fd2\" (UID: \"98ffc112-5fb6-4001-b071-2df7e3d90fd2\") " Jan 30 21:47:23 crc kubenswrapper[4869]: I0130 21:47:23.857071 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98ffc112-5fb6-4001-b071-2df7e3d90fd2-catalog-content\") pod \"98ffc112-5fb6-4001-b071-2df7e3d90fd2\" (UID: \"98ffc112-5fb6-4001-b071-2df7e3d90fd2\") " Jan 30 21:47:23 crc kubenswrapper[4869]: I0130 21:47:23.857960 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98ffc112-5fb6-4001-b071-2df7e3d90fd2-utilities" (OuterVolumeSpecName: "utilities") pod "98ffc112-5fb6-4001-b071-2df7e3d90fd2" (UID: "98ffc112-5fb6-4001-b071-2df7e3d90fd2"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:47:23 crc kubenswrapper[4869]: I0130 21:47:23.862516 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98ffc112-5fb6-4001-b071-2df7e3d90fd2-kube-api-access-nfs9c" (OuterVolumeSpecName: "kube-api-access-nfs9c") pod "98ffc112-5fb6-4001-b071-2df7e3d90fd2" (UID: "98ffc112-5fb6-4001-b071-2df7e3d90fd2"). InnerVolumeSpecName "kube-api-access-nfs9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:47:23 crc kubenswrapper[4869]: I0130 21:47:23.917850 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98ffc112-5fb6-4001-b071-2df7e3d90fd2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "98ffc112-5fb6-4001-b071-2df7e3d90fd2" (UID: "98ffc112-5fb6-4001-b071-2df7e3d90fd2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:47:23 crc kubenswrapper[4869]: I0130 21:47:23.958865 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98ffc112-5fb6-4001-b071-2df7e3d90fd2-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:23 crc kubenswrapper[4869]: I0130 21:47:23.958931 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfs9c\" (UniqueName: \"kubernetes.io/projected/98ffc112-5fb6-4001-b071-2df7e3d90fd2-kube-api-access-nfs9c\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:23 crc kubenswrapper[4869]: I0130 21:47:23.958949 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98ffc112-5fb6-4001-b071-2df7e3d90fd2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:24 crc kubenswrapper[4869]: I0130 21:47:24.022646 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-86lcz"] Jan 30 21:47:24 crc kubenswrapper[4869]: I0130 21:47:24.519769 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nlmmn" event={"ID":"98ffc112-5fb6-4001-b071-2df7e3d90fd2","Type":"ContainerDied","Data":"6de9e15db559cbbe174523a5489db956fa0e54b42e208953c6fe31da878ea49a"} Jan 30 21:47:24 crc kubenswrapper[4869]: I0130 21:47:24.520116 4869 scope.go:117] "RemoveContainer" containerID="fd6d274c4833a79ec1faec72865e062649ba0ff9fa445eedee124f379e2de9f3" Jan 30 21:47:24 crc kubenswrapper[4869]: I0130 21:47:24.519831 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nlmmn" Jan 30 21:47:24 crc kubenswrapper[4869]: I0130 21:47:24.519953 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-86lcz" podUID="72e1dcaa-1805-4157-8fd7-0e00177aaf4c" containerName="registry-server" containerID="cri-o://c417b8f5398446fceb87e32a8db235e01d61a57bec029c2dbb7f91950e7164d7" gracePeriod=2 Jan 30 21:47:24 crc kubenswrapper[4869]: I0130 21:47:24.543278 4869 scope.go:117] "RemoveContainer" containerID="4b6a2ff9175c3c22c3084804fbdb80b96c1e0f08a8317534e3ff0471c2070a84" Jan 30 21:47:24 crc kubenswrapper[4869]: I0130 21:47:24.551004 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nlmmn"] Jan 30 21:47:24 crc kubenswrapper[4869]: I0130 21:47:24.553039 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nlmmn"] Jan 30 21:47:24 crc kubenswrapper[4869]: I0130 21:47:24.562790 4869 scope.go:117] "RemoveContainer" containerID="932b5329c5999ef2d1f4256d38f55de123ae7b1a5cc8d83b06afacffd52808a4" Jan 30 21:47:25 crc kubenswrapper[4869]: I0130 21:47:25.893510 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98ffc112-5fb6-4001-b071-2df7e3d90fd2" path="/var/lib/kubelet/pods/98ffc112-5fb6-4001-b071-2df7e3d90fd2/volumes" Jan 30 21:47:26 crc kubenswrapper[4869]: I0130 21:47:26.532861 4869 generic.go:334] "Generic (PLEG): container finished" podID="72e1dcaa-1805-4157-8fd7-0e00177aaf4c" containerID="c417b8f5398446fceb87e32a8db235e01d61a57bec029c2dbb7f91950e7164d7" exitCode=0 Jan 30 21:47:26 crc kubenswrapper[4869]: I0130 21:47:26.533007 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-86lcz" event={"ID":"72e1dcaa-1805-4157-8fd7-0e00177aaf4c","Type":"ContainerDied","Data":"c417b8f5398446fceb87e32a8db235e01d61a57bec029c2dbb7f91950e7164d7"} Jan 30 21:47:27 crc kubenswrapper[4869]: I0130 21:47:27.421306 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-86lcz" Jan 30 21:47:27 crc kubenswrapper[4869]: I0130 21:47:27.504357 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtn52\" (UniqueName: \"kubernetes.io/projected/72e1dcaa-1805-4157-8fd7-0e00177aaf4c-kube-api-access-gtn52\") pod \"72e1dcaa-1805-4157-8fd7-0e00177aaf4c\" (UID: \"72e1dcaa-1805-4157-8fd7-0e00177aaf4c\") " Jan 30 21:47:27 crc kubenswrapper[4869]: I0130 21:47:27.504407 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72e1dcaa-1805-4157-8fd7-0e00177aaf4c-utilities\") pod \"72e1dcaa-1805-4157-8fd7-0e00177aaf4c\" (UID: \"72e1dcaa-1805-4157-8fd7-0e00177aaf4c\") " Jan 30 21:47:27 crc kubenswrapper[4869]: I0130 21:47:27.504447 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72e1dcaa-1805-4157-8fd7-0e00177aaf4c-catalog-content\") pod \"72e1dcaa-1805-4157-8fd7-0e00177aaf4c\" (UID: \"72e1dcaa-1805-4157-8fd7-0e00177aaf4c\") " Jan 30 21:47:27 crc kubenswrapper[4869]: I0130 21:47:27.505418 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72e1dcaa-1805-4157-8fd7-0e00177aaf4c-utilities" (OuterVolumeSpecName: "utilities") pod "72e1dcaa-1805-4157-8fd7-0e00177aaf4c" (UID: "72e1dcaa-1805-4157-8fd7-0e00177aaf4c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:47:27 crc kubenswrapper[4869]: I0130 21:47:27.509743 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72e1dcaa-1805-4157-8fd7-0e00177aaf4c-kube-api-access-gtn52" (OuterVolumeSpecName: "kube-api-access-gtn52") pod "72e1dcaa-1805-4157-8fd7-0e00177aaf4c" (UID: "72e1dcaa-1805-4157-8fd7-0e00177aaf4c"). InnerVolumeSpecName "kube-api-access-gtn52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:47:27 crc kubenswrapper[4869]: I0130 21:47:27.525703 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72e1dcaa-1805-4157-8fd7-0e00177aaf4c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "72e1dcaa-1805-4157-8fd7-0e00177aaf4c" (UID: "72e1dcaa-1805-4157-8fd7-0e00177aaf4c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:47:27 crc kubenswrapper[4869]: I0130 21:47:27.538817 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-86lcz" event={"ID":"72e1dcaa-1805-4157-8fd7-0e00177aaf4c","Type":"ContainerDied","Data":"7aa22409dc662f9e522fbe0a2a055b7e4f2ea0b9373ce7a325667b4a5b226fa5"} Jan 30 21:47:27 crc kubenswrapper[4869]: I0130 21:47:27.538863 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-86lcz" Jan 30 21:47:27 crc kubenswrapper[4869]: I0130 21:47:27.538871 4869 scope.go:117] "RemoveContainer" containerID="c417b8f5398446fceb87e32a8db235e01d61a57bec029c2dbb7f91950e7164d7" Jan 30 21:47:27 crc kubenswrapper[4869]: I0130 21:47:27.568404 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-86lcz"] Jan 30 21:47:27 crc kubenswrapper[4869]: I0130 21:47:27.572600 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-86lcz"] Jan 30 21:47:27 crc kubenswrapper[4869]: I0130 21:47:27.606369 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtn52\" (UniqueName: \"kubernetes.io/projected/72e1dcaa-1805-4157-8fd7-0e00177aaf4c-kube-api-access-gtn52\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:27 crc kubenswrapper[4869]: I0130 21:47:27.606425 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72e1dcaa-1805-4157-8fd7-0e00177aaf4c-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:27 crc kubenswrapper[4869]: I0130 21:47:27.606438 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72e1dcaa-1805-4157-8fd7-0e00177aaf4c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:27 crc kubenswrapper[4869]: I0130 21:47:27.885436 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72e1dcaa-1805-4157-8fd7-0e00177aaf4c" path="/var/lib/kubelet/pods/72e1dcaa-1805-4157-8fd7-0e00177aaf4c/volumes" Jan 30 21:47:28 crc kubenswrapper[4869]: I0130 21:47:28.940455 4869 scope.go:117] "RemoveContainer" containerID="d36db8a5338d5ea2231a6e02d029244e917d10976073eda5f183627eea086bbe" Jan 30 21:47:29 crc kubenswrapper[4869]: I0130 21:47:29.940914 4869 scope.go:117] "RemoveContainer" containerID="c1e086b18182e15352da21f44e1778ef5b89e5673ec387e66e954209de1ae9ff" Jan 30 21:47:32 crc kubenswrapper[4869]: I0130 21:47:32.576595 4869 generic.go:334] "Generic (PLEG): container finished" podID="b394f841-ea61-41c7-9b4b-7ad185073b70" containerID="73cd7cddb6afbd0a6ee910279ea8c2bb889488e711d0ac2d0e11e5803001d552" exitCode=0 Jan 30 21:47:32 crc kubenswrapper[4869]: I0130 21:47:32.576705 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rr69r" event={"ID":"b394f841-ea61-41c7-9b4b-7ad185073b70","Type":"ContainerDied","Data":"73cd7cddb6afbd0a6ee910279ea8c2bb889488e711d0ac2d0e11e5803001d552"} Jan 30 21:47:34 crc kubenswrapper[4869]: I0130 21:47:34.589483 4869 generic.go:334] "Generic (PLEG): container finished" podID="85baedd6-e513-4741-98cf-ef39cfda8ecb" containerID="ce48d518b58917d54b5d9e0b357213d43914aecc73717585aed5d040e2aa2847" exitCode=0 Jan 30 21:47:34 crc kubenswrapper[4869]: I0130 21:47:34.589671 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w5fzr" event={"ID":"85baedd6-e513-4741-98cf-ef39cfda8ecb","Type":"ContainerDied","Data":"ce48d518b58917d54b5d9e0b357213d43914aecc73717585aed5d040e2aa2847"} Jan 30 21:47:34 crc kubenswrapper[4869]: I0130 21:47:34.592612 4869 generic.go:334] "Generic (PLEG): container finished" podID="2f93f920-68d0-41d1-8a20-ca174eda2fcd" containerID="473808dffd4eab7365c1bbce716534ab0ff0b93b5b39c0d278b683a954aa947d" exitCode=0 Jan 30 21:47:34 crc kubenswrapper[4869]: I0130 21:47:34.592673 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-jkgvk" event={"ID":"2f93f920-68d0-41d1-8a20-ca174eda2fcd","Type":"ContainerDied","Data":"473808dffd4eab7365c1bbce716534ab0ff0b93b5b39c0d278b683a954aa947d"} Jan 30 21:47:34 crc kubenswrapper[4869]: I0130 21:47:34.595034 4869 generic.go:334] "Generic (PLEG): container finished" podID="4ebfcb0e-58a5-4ab1-894f-1a6093921531" containerID="69dc1ca55f3d35454428f0e4223daf0a6b3939cce5305132a6aa157e59d7b006" exitCode=0 Jan 30 21:47:34 crc kubenswrapper[4869]: I0130 21:47:34.595080 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6v7j4" event={"ID":"4ebfcb0e-58a5-4ab1-894f-1a6093921531","Type":"ContainerDied","Data":"69dc1ca55f3d35454428f0e4223daf0a6b3939cce5305132a6aa157e59d7b006"} Jan 30 21:47:34 crc kubenswrapper[4869]: I0130 21:47:34.600073 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rr69r" event={"ID":"b394f841-ea61-41c7-9b4b-7ad185073b70","Type":"ContainerStarted","Data":"2f1656b85d3ed469536399ca659885e08fcf346ea8c31f57fca6519a697b1b5d"} Jan 30 21:47:34 crc kubenswrapper[4869]: I0130 21:47:34.624797 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rr69r" podStartSLOduration=3.172408418 podStartE2EDuration="1m34.624782013s" podCreationTimestamp="2026-01-30 21:46:00 +0000 UTC" firstStartedPulling="2026-01-30 21:46:02.679690132 +0000 UTC m=+163.565448167" lastFinishedPulling="2026-01-30 21:47:34.132063747 +0000 UTC m=+255.017821762" observedRunningTime="2026-01-30 21:47:34.623374009 +0000 UTC m=+255.509132054" watchObservedRunningTime="2026-01-30 21:47:34.624782013 +0000 UTC m=+255.510540038" Jan 30 21:47:36 crc kubenswrapper[4869]: I0130 21:47:36.611414 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jkgvk" event={"ID":"2f93f920-68d0-41d1-8a20-ca174eda2fcd","Type":"ContainerStarted","Data":"f3f0495374d3dab58c2f3ac22b1916ce0b81a9267dda7d9695c37c57880ea45c"} Jan 30 21:47:36 crc kubenswrapper[4869]: I0130 21:47:36.614007 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6v7j4" event={"ID":"4ebfcb0e-58a5-4ab1-894f-1a6093921531","Type":"ContainerStarted","Data":"5d6897835c78122de172fcbf0cf20ab7d1366470b8a90d6f577767f51c23b338"} Jan 30 21:47:36 crc kubenswrapper[4869]: I0130 21:47:36.618248 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w5fzr" event={"ID":"85baedd6-e513-4741-98cf-ef39cfda8ecb","Type":"ContainerStarted","Data":"ea217b3e5b2bb79b36d6a5337f907054956205f7ba5b78eea834e3d9c909f649"} Jan 30 21:47:36 crc kubenswrapper[4869]: I0130 21:47:36.636203 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jkgvk" podStartSLOduration=3.433518904 podStartE2EDuration="1m37.636182381s" podCreationTimestamp="2026-01-30 21:45:59 +0000 UTC" firstStartedPulling="2026-01-30 21:46:01.585325246 +0000 UTC m=+162.471083271" lastFinishedPulling="2026-01-30 21:47:35.787988723 +0000 UTC m=+256.673746748" observedRunningTime="2026-01-30 21:47:36.632000687 +0000 UTC m=+257.517758712" watchObservedRunningTime="2026-01-30 21:47:36.636182381 +0000 UTC m=+257.521940406" Jan 30 21:47:36 crc kubenswrapper[4869]: I0130 21:47:36.651087 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-w5fzr" 
podStartSLOduration=3.887361247 podStartE2EDuration="1m34.651067545s" podCreationTimestamp="2026-01-30 21:46:02 +0000 UTC" firstStartedPulling="2026-01-30 21:46:04.747849574 +0000 UTC m=+165.633607589" lastFinishedPulling="2026-01-30 21:47:35.511555862 +0000 UTC m=+256.397313887" observedRunningTime="2026-01-30 21:47:36.650672543 +0000 UTC m=+257.536430568" watchObservedRunningTime="2026-01-30 21:47:36.651067545 +0000 UTC m=+257.536825570" Jan 30 21:47:36 crc kubenswrapper[4869]: I0130 21:47:36.671832 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6v7j4" podStartSLOduration=4.238806527 podStartE2EDuration="1m38.671811176s" podCreationTimestamp="2026-01-30 21:45:58 +0000 UTC" firstStartedPulling="2026-01-30 21:46:01.585714408 +0000 UTC m=+162.471472433" lastFinishedPulling="2026-01-30 21:47:36.018719057 +0000 UTC m=+256.904477082" observedRunningTime="2026-01-30 21:47:36.669181342 +0000 UTC m=+257.554939377" watchObservedRunningTime="2026-01-30 21:47:36.671811176 +0000 UTC m=+257.557569201" Jan 30 21:47:37 crc kubenswrapper[4869]: I0130 21:47:37.712548 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-76496656b7-57rfw"] Jan 30 21:47:37 crc kubenswrapper[4869]: I0130 21:47:37.712765 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-76496656b7-57rfw" podUID="edff57f9-27da-47e1-9d1b-174d22fcced3" containerName="controller-manager" containerID="cri-o://041a4356f58c689465a3dd80163d1a47ce8c5e45a1bf4e221ee395ca77abb329" gracePeriod=30 Jan 30 21:47:37 crc kubenswrapper[4869]: I0130 21:47:37.803077 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c7d7f8749-dqjfq"] Jan 30 21:47:37 crc kubenswrapper[4869]: I0130 21:47:37.803502 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6c7d7f8749-dqjfq" podUID="d442182f-dccd-43c0-a24f-e13064ee94b5" containerName="route-controller-manager" containerID="cri-o://bed447bd830f6a7ae52d45b3540241619893ebbe0b37b015c18dd5e750f1230c" gracePeriod=30 Jan 30 21:47:38 crc kubenswrapper[4869]: I0130 21:47:38.634467 4869 generic.go:334] "Generic (PLEG): container finished" podID="edff57f9-27da-47e1-9d1b-174d22fcced3" containerID="041a4356f58c689465a3dd80163d1a47ce8c5e45a1bf4e221ee395ca77abb329" exitCode=0 Jan 30 21:47:38 crc kubenswrapper[4869]: I0130 21:47:38.634865 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76496656b7-57rfw" event={"ID":"edff57f9-27da-47e1-9d1b-174d22fcced3","Type":"ContainerDied","Data":"041a4356f58c689465a3dd80163d1a47ce8c5e45a1bf4e221ee395ca77abb329"} Jan 30 21:47:38 crc kubenswrapper[4869]: I0130 21:47:38.636540 4869 generic.go:334] "Generic (PLEG): container finished" podID="d442182f-dccd-43c0-a24f-e13064ee94b5" containerID="bed447bd830f6a7ae52d45b3540241619893ebbe0b37b015c18dd5e750f1230c" exitCode=0 Jan 30 21:47:38 crc kubenswrapper[4869]: I0130 21:47:38.636577 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c7d7f8749-dqjfq" event={"ID":"d442182f-dccd-43c0-a24f-e13064ee94b5","Type":"ContainerDied","Data":"bed447bd830f6a7ae52d45b3540241619893ebbe0b37b015c18dd5e750f1230c"} Jan 30 21:47:38 crc kubenswrapper[4869]: I0130 21:47:38.868459 4869 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c7d7f8749-dqjfq" Jan 30 21:47:38 crc kubenswrapper[4869]: I0130 21:47:38.895430 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fbc9cd55b-dqfs5"] Jan 30 21:47:38 crc kubenswrapper[4869]: E0130 21:47:38.895661 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72e1dcaa-1805-4157-8fd7-0e00177aaf4c" containerName="registry-server" Jan 30 21:47:38 crc kubenswrapper[4869]: I0130 21:47:38.895674 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="72e1dcaa-1805-4157-8fd7-0e00177aaf4c" containerName="registry-server" Jan 30 21:47:38 crc kubenswrapper[4869]: E0130 21:47:38.895684 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98ffc112-5fb6-4001-b071-2df7e3d90fd2" containerName="registry-server" Jan 30 21:47:38 crc kubenswrapper[4869]: I0130 21:47:38.895690 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="98ffc112-5fb6-4001-b071-2df7e3d90fd2" containerName="registry-server" Jan 30 21:47:38 crc kubenswrapper[4869]: E0130 21:47:38.895706 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b828b21-57e4-4990-b9dd-184fbfc06736" containerName="pruner" Jan 30 21:47:38 crc kubenswrapper[4869]: I0130 21:47:38.895713 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b828b21-57e4-4990-b9dd-184fbfc06736" containerName="pruner" Jan 30 21:47:38 crc kubenswrapper[4869]: E0130 21:47:38.895721 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72e1dcaa-1805-4157-8fd7-0e00177aaf4c" containerName="extract-utilities" Jan 30 21:47:38 crc kubenswrapper[4869]: I0130 21:47:38.895728 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="72e1dcaa-1805-4157-8fd7-0e00177aaf4c" containerName="extract-utilities" Jan 30 21:47:38 crc kubenswrapper[4869]: E0130 21:47:38.895736 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72e1dcaa-1805-4157-8fd7-0e00177aaf4c" containerName="extract-content" Jan 30 21:47:38 crc kubenswrapper[4869]: I0130 21:47:38.895741 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="72e1dcaa-1805-4157-8fd7-0e00177aaf4c" containerName="extract-content" Jan 30 21:47:38 crc kubenswrapper[4869]: E0130 21:47:38.895753 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d442182f-dccd-43c0-a24f-e13064ee94b5" containerName="route-controller-manager" Jan 30 21:47:38 crc kubenswrapper[4869]: I0130 21:47:38.895761 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d442182f-dccd-43c0-a24f-e13064ee94b5" containerName="route-controller-manager" Jan 30 21:47:38 crc kubenswrapper[4869]: E0130 21:47:38.895769 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98ffc112-5fb6-4001-b071-2df7e3d90fd2" containerName="extract-content" Jan 30 21:47:38 crc kubenswrapper[4869]: I0130 21:47:38.895774 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="98ffc112-5fb6-4001-b071-2df7e3d90fd2" containerName="extract-content" Jan 30 21:47:38 crc kubenswrapper[4869]: E0130 21:47:38.895782 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98ffc112-5fb6-4001-b071-2df7e3d90fd2" containerName="extract-utilities" Jan 30 21:47:38 crc kubenswrapper[4869]: I0130 21:47:38.895788 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="98ffc112-5fb6-4001-b071-2df7e3d90fd2" containerName="extract-utilities" Jan 30 21:47:38 crc 
kubenswrapper[4869]: I0130 21:47:38.895872 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="98ffc112-5fb6-4001-b071-2df7e3d90fd2" containerName="registry-server" Jan 30 21:47:38 crc kubenswrapper[4869]: I0130 21:47:38.895880 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="d442182f-dccd-43c0-a24f-e13064ee94b5" containerName="route-controller-manager" Jan 30 21:47:38 crc kubenswrapper[4869]: I0130 21:47:38.895895 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b828b21-57e4-4990-b9dd-184fbfc06736" containerName="pruner" Jan 30 21:47:38 crc kubenswrapper[4869]: I0130 21:47:38.895934 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="72e1dcaa-1805-4157-8fd7-0e00177aaf4c" containerName="registry-server" Jan 30 21:47:38 crc kubenswrapper[4869]: I0130 21:47:38.896270 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7fbc9cd55b-dqfs5" Jan 30 21:47:38 crc kubenswrapper[4869]: I0130 21:47:38.909950 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fbc9cd55b-dqfs5"] Jan 30 21:47:38 crc kubenswrapper[4869]: I0130 21:47:38.956780 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flx6c\" (UniqueName: \"kubernetes.io/projected/d442182f-dccd-43c0-a24f-e13064ee94b5-kube-api-access-flx6c\") pod \"d442182f-dccd-43c0-a24f-e13064ee94b5\" (UID: \"d442182f-dccd-43c0-a24f-e13064ee94b5\") " Jan 30 21:47:38 crc kubenswrapper[4869]: I0130 21:47:38.956856 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d442182f-dccd-43c0-a24f-e13064ee94b5-client-ca\") pod \"d442182f-dccd-43c0-a24f-e13064ee94b5\" (UID: \"d442182f-dccd-43c0-a24f-e13064ee94b5\") " Jan 30 21:47:38 crc kubenswrapper[4869]: I0130 21:47:38.956917 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d442182f-dccd-43c0-a24f-e13064ee94b5-config\") pod \"d442182f-dccd-43c0-a24f-e13064ee94b5\" (UID: \"d442182f-dccd-43c0-a24f-e13064ee94b5\") " Jan 30 21:47:38 crc kubenswrapper[4869]: I0130 21:47:38.956971 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d442182f-dccd-43c0-a24f-e13064ee94b5-serving-cert\") pod \"d442182f-dccd-43c0-a24f-e13064ee94b5\" (UID: \"d442182f-dccd-43c0-a24f-e13064ee94b5\") " Jan 30 21:47:38 crc kubenswrapper[4869]: I0130 21:47:38.957147 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abd2a4d5-cfb7-4ae5-865c-e95650925df8-client-ca\") pod \"route-controller-manager-7fbc9cd55b-dqfs5\" (UID: \"abd2a4d5-cfb7-4ae5-865c-e95650925df8\") " pod="openshift-route-controller-manager/route-controller-manager-7fbc9cd55b-dqfs5" Jan 30 21:47:38 crc kubenswrapper[4869]: I0130 21:47:38.957181 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abd2a4d5-cfb7-4ae5-865c-e95650925df8-config\") pod \"route-controller-manager-7fbc9cd55b-dqfs5\" (UID: \"abd2a4d5-cfb7-4ae5-865c-e95650925df8\") " pod="openshift-route-controller-manager/route-controller-manager-7fbc9cd55b-dqfs5" Jan 30 21:47:38 crc kubenswrapper[4869]: I0130 
21:47:38.957232 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trsx2\" (UniqueName: \"kubernetes.io/projected/abd2a4d5-cfb7-4ae5-865c-e95650925df8-kube-api-access-trsx2\") pod \"route-controller-manager-7fbc9cd55b-dqfs5\" (UID: \"abd2a4d5-cfb7-4ae5-865c-e95650925df8\") " pod="openshift-route-controller-manager/route-controller-manager-7fbc9cd55b-dqfs5" Jan 30 21:47:38 crc kubenswrapper[4869]: I0130 21:47:38.957285 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abd2a4d5-cfb7-4ae5-865c-e95650925df8-serving-cert\") pod \"route-controller-manager-7fbc9cd55b-dqfs5\" (UID: \"abd2a4d5-cfb7-4ae5-865c-e95650925df8\") " pod="openshift-route-controller-manager/route-controller-manager-7fbc9cd55b-dqfs5" Jan 30 21:47:38 crc kubenswrapper[4869]: I0130 21:47:38.958079 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d442182f-dccd-43c0-a24f-e13064ee94b5-config" (OuterVolumeSpecName: "config") pod "d442182f-dccd-43c0-a24f-e13064ee94b5" (UID: "d442182f-dccd-43c0-a24f-e13064ee94b5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:47:38 crc kubenswrapper[4869]: I0130 21:47:38.958592 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d442182f-dccd-43c0-a24f-e13064ee94b5-client-ca" (OuterVolumeSpecName: "client-ca") pod "d442182f-dccd-43c0-a24f-e13064ee94b5" (UID: "d442182f-dccd-43c0-a24f-e13064ee94b5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:47:38 crc kubenswrapper[4869]: I0130 21:47:38.963598 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d442182f-dccd-43c0-a24f-e13064ee94b5-kube-api-access-flx6c" (OuterVolumeSpecName: "kube-api-access-flx6c") pod "d442182f-dccd-43c0-a24f-e13064ee94b5" (UID: "d442182f-dccd-43c0-a24f-e13064ee94b5"). InnerVolumeSpecName "kube-api-access-flx6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:47:38 crc kubenswrapper[4869]: I0130 21:47:38.965416 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d442182f-dccd-43c0-a24f-e13064ee94b5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d442182f-dccd-43c0-a24f-e13064ee94b5" (UID: "d442182f-dccd-43c0-a24f-e13064ee94b5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:47:39 crc kubenswrapper[4869]: I0130 21:47:39.058931 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abd2a4d5-cfb7-4ae5-865c-e95650925df8-serving-cert\") pod \"route-controller-manager-7fbc9cd55b-dqfs5\" (UID: \"abd2a4d5-cfb7-4ae5-865c-e95650925df8\") " pod="openshift-route-controller-manager/route-controller-manager-7fbc9cd55b-dqfs5" Jan 30 21:47:39 crc kubenswrapper[4869]: I0130 21:47:39.059032 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abd2a4d5-cfb7-4ae5-865c-e95650925df8-client-ca\") pod \"route-controller-manager-7fbc9cd55b-dqfs5\" (UID: \"abd2a4d5-cfb7-4ae5-865c-e95650925df8\") " pod="openshift-route-controller-manager/route-controller-manager-7fbc9cd55b-dqfs5" Jan 30 21:47:39 crc kubenswrapper[4869]: I0130 21:47:39.059097 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abd2a4d5-cfb7-4ae5-865c-e95650925df8-config\") pod \"route-controller-manager-7fbc9cd55b-dqfs5\" (UID: \"abd2a4d5-cfb7-4ae5-865c-e95650925df8\") " pod="openshift-route-controller-manager/route-controller-manager-7fbc9cd55b-dqfs5" Jan 30 21:47:39 crc kubenswrapper[4869]: I0130 21:47:39.059138 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trsx2\" (UniqueName: \"kubernetes.io/projected/abd2a4d5-cfb7-4ae5-865c-e95650925df8-kube-api-access-trsx2\") pod \"route-controller-manager-7fbc9cd55b-dqfs5\" (UID: \"abd2a4d5-cfb7-4ae5-865c-e95650925df8\") " pod="openshift-route-controller-manager/route-controller-manager-7fbc9cd55b-dqfs5" Jan 30 21:47:39 crc kubenswrapper[4869]: I0130 21:47:39.059327 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flx6c\" (UniqueName: \"kubernetes.io/projected/d442182f-dccd-43c0-a24f-e13064ee94b5-kube-api-access-flx6c\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:39 crc kubenswrapper[4869]: I0130 21:47:39.059512 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d442182f-dccd-43c0-a24f-e13064ee94b5-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:39 crc kubenswrapper[4869]: I0130 21:47:39.059582 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d442182f-dccd-43c0-a24f-e13064ee94b5-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:39 crc kubenswrapper[4869]: I0130 21:47:39.059597 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d442182f-dccd-43c0-a24f-e13064ee94b5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:39 crc kubenswrapper[4869]: I0130 21:47:39.060670 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abd2a4d5-cfb7-4ae5-865c-e95650925df8-config\") pod \"route-controller-manager-7fbc9cd55b-dqfs5\" (UID: \"abd2a4d5-cfb7-4ae5-865c-e95650925df8\") " pod="openshift-route-controller-manager/route-controller-manager-7fbc9cd55b-dqfs5" Jan 30 21:47:39 crc kubenswrapper[4869]: I0130 21:47:39.114414 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abd2a4d5-cfb7-4ae5-865c-e95650925df8-client-ca\") pod \"route-controller-manager-7fbc9cd55b-dqfs5\" 
(UID: \"abd2a4d5-cfb7-4ae5-865c-e95650925df8\") " pod="openshift-route-controller-manager/route-controller-manager-7fbc9cd55b-dqfs5" Jan 30 21:47:39 crc kubenswrapper[4869]: I0130 21:47:39.114597 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abd2a4d5-cfb7-4ae5-865c-e95650925df8-serving-cert\") pod \"route-controller-manager-7fbc9cd55b-dqfs5\" (UID: \"abd2a4d5-cfb7-4ae5-865c-e95650925df8\") " pod="openshift-route-controller-manager/route-controller-manager-7fbc9cd55b-dqfs5" Jan 30 21:47:39 crc kubenswrapper[4869]: I0130 21:47:39.121754 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trsx2\" (UniqueName: \"kubernetes.io/projected/abd2a4d5-cfb7-4ae5-865c-e95650925df8-kube-api-access-trsx2\") pod \"route-controller-manager-7fbc9cd55b-dqfs5\" (UID: \"abd2a4d5-cfb7-4ae5-865c-e95650925df8\") " pod="openshift-route-controller-manager/route-controller-manager-7fbc9cd55b-dqfs5" Jan 30 21:47:39 crc kubenswrapper[4869]: I0130 21:47:39.214442 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7fbc9cd55b-dqfs5" Jan 30 21:47:39 crc kubenswrapper[4869]: I0130 21:47:39.348663 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6v7j4" Jan 30 21:47:39 crc kubenswrapper[4869]: I0130 21:47:39.349027 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6v7j4" Jan 30 21:47:39 crc kubenswrapper[4869]: I0130 21:47:39.400263 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6v7j4" Jan 30 21:47:39 crc kubenswrapper[4869]: I0130 21:47:39.628383 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fbc9cd55b-dqfs5"] Jan 30 21:47:39 crc kubenswrapper[4869]: W0130 21:47:39.631220 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabd2a4d5_cfb7_4ae5_865c_e95650925df8.slice/crio-f5c44b265ec9e9995f2668d217931298e21b0c2b9043d4055fb095a0a7967c98 WatchSource:0}: Error finding container f5c44b265ec9e9995f2668d217931298e21b0c2b9043d4055fb095a0a7967c98: Status 404 returned error can't find the container with id f5c44b265ec9e9995f2668d217931298e21b0c2b9043d4055fb095a0a7967c98 Jan 30 21:47:39 crc kubenswrapper[4869]: I0130 21:47:39.646500 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7fbc9cd55b-dqfs5" event={"ID":"abd2a4d5-cfb7-4ae5-865c-e95650925df8","Type":"ContainerStarted","Data":"f5c44b265ec9e9995f2668d217931298e21b0c2b9043d4055fb095a0a7967c98"} Jan 30 21:47:39 crc kubenswrapper[4869]: I0130 21:47:39.648124 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c7d7f8749-dqjfq" event={"ID":"d442182f-dccd-43c0-a24f-e13064ee94b5","Type":"ContainerDied","Data":"866b8b24af4f81067dad9c5ebf034f902139b1daf313d3b55068faab4afdd1a1"} Jan 30 21:47:39 crc kubenswrapper[4869]: I0130 21:47:39.648162 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c7d7f8749-dqjfq" Jan 30 21:47:39 crc kubenswrapper[4869]: I0130 21:47:39.648170 4869 scope.go:117] "RemoveContainer" containerID="bed447bd830f6a7ae52d45b3540241619893ebbe0b37b015c18dd5e750f1230c" Jan 30 21:47:39 crc kubenswrapper[4869]: I0130 21:47:39.676589 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c7d7f8749-dqjfq"] Jan 30 21:47:39 crc kubenswrapper[4869]: I0130 21:47:39.686945 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c7d7f8749-dqjfq"] Jan 30 21:47:39 crc kubenswrapper[4869]: I0130 21:47:39.755423 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jkgvk" Jan 30 21:47:39 crc kubenswrapper[4869]: I0130 21:47:39.755735 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jkgvk" Jan 30 21:47:39 crc kubenswrapper[4869]: I0130 21:47:39.793314 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jkgvk" Jan 30 21:47:39 crc kubenswrapper[4869]: I0130 21:47:39.883076 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d442182f-dccd-43c0-a24f-e13064ee94b5" path="/var/lib/kubelet/pods/d442182f-dccd-43c0-a24f-e13064ee94b5/volumes" Jan 30 21:47:40 crc kubenswrapper[4869]: I0130 21:47:40.103022 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-76496656b7-57rfw" Jan 30 21:47:40 crc kubenswrapper[4869]: I0130 21:47:40.281621 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77g7s\" (UniqueName: \"kubernetes.io/projected/edff57f9-27da-47e1-9d1b-174d22fcced3-kube-api-access-77g7s\") pod \"edff57f9-27da-47e1-9d1b-174d22fcced3\" (UID: \"edff57f9-27da-47e1-9d1b-174d22fcced3\") " Jan 30 21:47:40 crc kubenswrapper[4869]: I0130 21:47:40.281728 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/edff57f9-27da-47e1-9d1b-174d22fcced3-proxy-ca-bundles\") pod \"edff57f9-27da-47e1-9d1b-174d22fcced3\" (UID: \"edff57f9-27da-47e1-9d1b-174d22fcced3\") " Jan 30 21:47:40 crc kubenswrapper[4869]: I0130 21:47:40.281747 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edff57f9-27da-47e1-9d1b-174d22fcced3-serving-cert\") pod \"edff57f9-27da-47e1-9d1b-174d22fcced3\" (UID: \"edff57f9-27da-47e1-9d1b-174d22fcced3\") " Jan 30 21:47:40 crc kubenswrapper[4869]: I0130 21:47:40.281796 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edff57f9-27da-47e1-9d1b-174d22fcced3-config\") pod \"edff57f9-27da-47e1-9d1b-174d22fcced3\" (UID: \"edff57f9-27da-47e1-9d1b-174d22fcced3\") " Jan 30 21:47:40 crc kubenswrapper[4869]: I0130 21:47:40.281817 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/edff57f9-27da-47e1-9d1b-174d22fcced3-client-ca\") pod \"edff57f9-27da-47e1-9d1b-174d22fcced3\" (UID: \"edff57f9-27da-47e1-9d1b-174d22fcced3\") " Jan 30 21:47:40 crc kubenswrapper[4869]: I0130 21:47:40.282687 4869 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edff57f9-27da-47e1-9d1b-174d22fcced3-config" (OuterVolumeSpecName: "config") pod "edff57f9-27da-47e1-9d1b-174d22fcced3" (UID: "edff57f9-27da-47e1-9d1b-174d22fcced3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:47:40 crc kubenswrapper[4869]: I0130 21:47:40.282773 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edff57f9-27da-47e1-9d1b-174d22fcced3-client-ca" (OuterVolumeSpecName: "client-ca") pod "edff57f9-27da-47e1-9d1b-174d22fcced3" (UID: "edff57f9-27da-47e1-9d1b-174d22fcced3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:47:40 crc kubenswrapper[4869]: I0130 21:47:40.283035 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edff57f9-27da-47e1-9d1b-174d22fcced3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "edff57f9-27da-47e1-9d1b-174d22fcced3" (UID: "edff57f9-27da-47e1-9d1b-174d22fcced3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:47:40 crc kubenswrapper[4869]: I0130 21:47:40.289460 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edff57f9-27da-47e1-9d1b-174d22fcced3-kube-api-access-77g7s" (OuterVolumeSpecName: "kube-api-access-77g7s") pod "edff57f9-27da-47e1-9d1b-174d22fcced3" (UID: "edff57f9-27da-47e1-9d1b-174d22fcced3"). InnerVolumeSpecName "kube-api-access-77g7s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:47:40 crc kubenswrapper[4869]: I0130 21:47:40.289495 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edff57f9-27da-47e1-9d1b-174d22fcced3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "edff57f9-27da-47e1-9d1b-174d22fcced3" (UID: "edff57f9-27da-47e1-9d1b-174d22fcced3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:47:40 crc kubenswrapper[4869]: I0130 21:47:40.383770 4869 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/edff57f9-27da-47e1-9d1b-174d22fcced3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:40 crc kubenswrapper[4869]: I0130 21:47:40.383800 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edff57f9-27da-47e1-9d1b-174d22fcced3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:40 crc kubenswrapper[4869]: I0130 21:47:40.383814 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edff57f9-27da-47e1-9d1b-174d22fcced3-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:40 crc kubenswrapper[4869]: I0130 21:47:40.383825 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/edff57f9-27da-47e1-9d1b-174d22fcced3-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:40 crc kubenswrapper[4869]: I0130 21:47:40.383836 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77g7s\" (UniqueName: \"kubernetes.io/projected/edff57f9-27da-47e1-9d1b-174d22fcced3-kube-api-access-77g7s\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:40 crc kubenswrapper[4869]: I0130 21:47:40.654649 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-76496656b7-57rfw" Jan 30 21:47:40 crc kubenswrapper[4869]: I0130 21:47:40.654684 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76496656b7-57rfw" event={"ID":"edff57f9-27da-47e1-9d1b-174d22fcced3","Type":"ContainerDied","Data":"8c7b5468025e5f5554b34f649e4fe9bfd17fa0c1950d5cd4d824456e19f48caf"} Jan 30 21:47:40 crc kubenswrapper[4869]: I0130 21:47:40.654758 4869 scope.go:117] "RemoveContainer" containerID="041a4356f58c689465a3dd80163d1a47ce8c5e45a1bf4e221ee395ca77abb329" Jan 30 21:47:40 crc kubenswrapper[4869]: I0130 21:47:40.688681 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-76496656b7-57rfw"] Jan 30 21:47:40 crc kubenswrapper[4869]: I0130 21:47:40.693259 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-76496656b7-57rfw"] Jan 30 21:47:40 crc kubenswrapper[4869]: I0130 21:47:40.699202 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jkgvk" Jan 30 21:47:41 crc kubenswrapper[4869]: I0130 21:47:41.281531 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rr69r" Jan 30 21:47:41 crc kubenswrapper[4869]: I0130 21:47:41.281680 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rr69r" Jan 30 21:47:41 crc kubenswrapper[4869]: I0130 21:47:41.332541 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rr69r" Jan 30 21:47:41 crc kubenswrapper[4869]: I0130 21:47:41.577524 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7c6bc75d49-flzfc"] Jan 30 21:47:41 crc kubenswrapper[4869]: E0130 21:47:41.577944 4869 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="edff57f9-27da-47e1-9d1b-174d22fcced3" containerName="controller-manager" Jan 30 21:47:41 crc kubenswrapper[4869]: I0130 21:47:41.577973 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="edff57f9-27da-47e1-9d1b-174d22fcced3" containerName="controller-manager" Jan 30 21:47:41 crc kubenswrapper[4869]: I0130 21:47:41.578115 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="edff57f9-27da-47e1-9d1b-174d22fcced3" containerName="controller-manager" Jan 30 21:47:41 crc kubenswrapper[4869]: I0130 21:47:41.578771 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c6bc75d49-flzfc" Jan 30 21:47:41 crc kubenswrapper[4869]: I0130 21:47:41.580527 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 21:47:41 crc kubenswrapper[4869]: I0130 21:47:41.580747 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 21:47:41 crc kubenswrapper[4869]: I0130 21:47:41.581443 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 21:47:41 crc kubenswrapper[4869]: I0130 21:47:41.581553 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 21:47:41 crc kubenswrapper[4869]: I0130 21:47:41.581609 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 21:47:41 crc kubenswrapper[4869]: I0130 21:47:41.582395 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 21:47:41 crc kubenswrapper[4869]: I0130 21:47:41.592265 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c6bc75d49-flzfc"] Jan 30 21:47:41 crc kubenswrapper[4869]: I0130 21:47:41.597802 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 21:47:41 crc kubenswrapper[4869]: I0130 21:47:41.600794 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be261ab9-3f72-4696-8357-f2567b4f20f8-config\") pod \"controller-manager-7c6bc75d49-flzfc\" (UID: \"be261ab9-3f72-4696-8357-f2567b4f20f8\") " pod="openshift-controller-manager/controller-manager-7c6bc75d49-flzfc" Jan 30 21:47:41 crc kubenswrapper[4869]: I0130 21:47:41.600941 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be261ab9-3f72-4696-8357-f2567b4f20f8-serving-cert\") pod \"controller-manager-7c6bc75d49-flzfc\" (UID: \"be261ab9-3f72-4696-8357-f2567b4f20f8\") " pod="openshift-controller-manager/controller-manager-7c6bc75d49-flzfc" Jan 30 21:47:41 crc kubenswrapper[4869]: I0130 21:47:41.600994 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjs2f\" (UniqueName: \"kubernetes.io/projected/be261ab9-3f72-4696-8357-f2567b4f20f8-kube-api-access-cjs2f\") pod \"controller-manager-7c6bc75d49-flzfc\" (UID: \"be261ab9-3f72-4696-8357-f2567b4f20f8\") " pod="openshift-controller-manager/controller-manager-7c6bc75d49-flzfc" Jan 30 21:47:41 crc kubenswrapper[4869]: I0130 
21:47:41.601050 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/be261ab9-3f72-4696-8357-f2567b4f20f8-proxy-ca-bundles\") pod \"controller-manager-7c6bc75d49-flzfc\" (UID: \"be261ab9-3f72-4696-8357-f2567b4f20f8\") " pod="openshift-controller-manager/controller-manager-7c6bc75d49-flzfc" Jan 30 21:47:41 crc kubenswrapper[4869]: I0130 21:47:41.601097 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be261ab9-3f72-4696-8357-f2567b4f20f8-client-ca\") pod \"controller-manager-7c6bc75d49-flzfc\" (UID: \"be261ab9-3f72-4696-8357-f2567b4f20f8\") " pod="openshift-controller-manager/controller-manager-7c6bc75d49-flzfc" Jan 30 21:47:41 crc kubenswrapper[4869]: I0130 21:47:41.664061 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7fbc9cd55b-dqfs5" event={"ID":"abd2a4d5-cfb7-4ae5-865c-e95650925df8","Type":"ContainerStarted","Data":"09cdcde61ff427ce25759af098c243e3d8390adc84bb234dc2bf6549d307f86d"} Jan 30 21:47:41 crc kubenswrapper[4869]: I0130 21:47:41.664320 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7fbc9cd55b-dqfs5" Jan 30 21:47:41 crc kubenswrapper[4869]: I0130 21:47:41.674328 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7fbc9cd55b-dqfs5" Jan 30 21:47:41 crc kubenswrapper[4869]: I0130 21:47:41.689186 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7fbc9cd55b-dqfs5" podStartSLOduration=4.689165586 podStartE2EDuration="4.689165586s" podCreationTimestamp="2026-01-30 21:47:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:47:41.688203934 +0000 UTC m=+262.573961979" watchObservedRunningTime="2026-01-30 21:47:41.689165586 +0000 UTC m=+262.574923611" Jan 30 21:47:41 crc kubenswrapper[4869]: I0130 21:47:41.702247 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be261ab9-3f72-4696-8357-f2567b4f20f8-client-ca\") pod \"controller-manager-7c6bc75d49-flzfc\" (UID: \"be261ab9-3f72-4696-8357-f2567b4f20f8\") " pod="openshift-controller-manager/controller-manager-7c6bc75d49-flzfc" Jan 30 21:47:41 crc kubenswrapper[4869]: I0130 21:47:41.702359 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be261ab9-3f72-4696-8357-f2567b4f20f8-config\") pod \"controller-manager-7c6bc75d49-flzfc\" (UID: \"be261ab9-3f72-4696-8357-f2567b4f20f8\") " pod="openshift-controller-manager/controller-manager-7c6bc75d49-flzfc" Jan 30 21:47:41 crc kubenswrapper[4869]: I0130 21:47:41.702387 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be261ab9-3f72-4696-8357-f2567b4f20f8-serving-cert\") pod \"controller-manager-7c6bc75d49-flzfc\" (UID: \"be261ab9-3f72-4696-8357-f2567b4f20f8\") " pod="openshift-controller-manager/controller-manager-7c6bc75d49-flzfc" Jan 30 21:47:41 crc kubenswrapper[4869]: I0130 21:47:41.702413 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cjs2f\" (UniqueName: \"kubernetes.io/projected/be261ab9-3f72-4696-8357-f2567b4f20f8-kube-api-access-cjs2f\") pod \"controller-manager-7c6bc75d49-flzfc\" (UID: \"be261ab9-3f72-4696-8357-f2567b4f20f8\") " pod="openshift-controller-manager/controller-manager-7c6bc75d49-flzfc" Jan 30 21:47:41 crc kubenswrapper[4869]: I0130 21:47:41.702459 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/be261ab9-3f72-4696-8357-f2567b4f20f8-proxy-ca-bundles\") pod \"controller-manager-7c6bc75d49-flzfc\" (UID: \"be261ab9-3f72-4696-8357-f2567b4f20f8\") " pod="openshift-controller-manager/controller-manager-7c6bc75d49-flzfc" Jan 30 21:47:41 crc kubenswrapper[4869]: I0130 21:47:41.703449 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be261ab9-3f72-4696-8357-f2567b4f20f8-client-ca\") pod \"controller-manager-7c6bc75d49-flzfc\" (UID: \"be261ab9-3f72-4696-8357-f2567b4f20f8\") " pod="openshift-controller-manager/controller-manager-7c6bc75d49-flzfc" Jan 30 21:47:41 crc kubenswrapper[4869]: I0130 21:47:41.704101 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be261ab9-3f72-4696-8357-f2567b4f20f8-config\") pod \"controller-manager-7c6bc75d49-flzfc\" (UID: \"be261ab9-3f72-4696-8357-f2567b4f20f8\") " pod="openshift-controller-manager/controller-manager-7c6bc75d49-flzfc" Jan 30 21:47:41 crc kubenswrapper[4869]: I0130 21:47:41.704549 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/be261ab9-3f72-4696-8357-f2567b4f20f8-proxy-ca-bundles\") pod \"controller-manager-7c6bc75d49-flzfc\" (UID: \"be261ab9-3f72-4696-8357-f2567b4f20f8\") " pod="openshift-controller-manager/controller-manager-7c6bc75d49-flzfc" Jan 30 21:47:41 crc kubenswrapper[4869]: I0130 21:47:41.711218 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be261ab9-3f72-4696-8357-f2567b4f20f8-serving-cert\") pod \"controller-manager-7c6bc75d49-flzfc\" (UID: \"be261ab9-3f72-4696-8357-f2567b4f20f8\") " pod="openshift-controller-manager/controller-manager-7c6bc75d49-flzfc" Jan 30 21:47:41 crc kubenswrapper[4869]: I0130 21:47:41.722251 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rr69r" Jan 30 21:47:41 crc kubenswrapper[4869]: I0130 21:47:41.722254 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjs2f\" (UniqueName: \"kubernetes.io/projected/be261ab9-3f72-4696-8357-f2567b4f20f8-kube-api-access-cjs2f\") pod \"controller-manager-7c6bc75d49-flzfc\" (UID: \"be261ab9-3f72-4696-8357-f2567b4f20f8\") " pod="openshift-controller-manager/controller-manager-7c6bc75d49-flzfc" Jan 30 21:47:41 crc kubenswrapper[4869]: I0130 21:47:41.884019 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edff57f9-27da-47e1-9d1b-174d22fcced3" path="/var/lib/kubelet/pods/edff57f9-27da-47e1-9d1b-174d22fcced3/volumes" Jan 30 21:47:41 crc kubenswrapper[4869]: I0130 21:47:41.899201 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7c6bc75d49-flzfc" Jan 30 21:47:42 crc kubenswrapper[4869]: I0130 21:47:42.331021 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c6bc75d49-flzfc"] Jan 30 21:47:42 crc kubenswrapper[4869]: W0130 21:47:42.335743 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe261ab9_3f72_4696_8357_f2567b4f20f8.slice/crio-7accb92361236c9f30115595acd25d649f24534f05d2d1dad1d167cbfd61d417 WatchSource:0}: Error finding container 7accb92361236c9f30115595acd25d649f24534f05d2d1dad1d167cbfd61d417: Status 404 returned error can't find the container with id 7accb92361236c9f30115595acd25d649f24534f05d2d1dad1d167cbfd61d417 Jan 30 21:47:42 crc kubenswrapper[4869]: I0130 21:47:42.424590 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jkgvk"] Jan 30 21:47:42 crc kubenswrapper[4869]: I0130 21:47:42.672775 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c6bc75d49-flzfc" event={"ID":"be261ab9-3f72-4696-8357-f2567b4f20f8","Type":"ContainerStarted","Data":"502c817202ae2c832ce986979663b04d643ae05dd0110faabfb0836f61015f14"} Jan 30 21:47:42 crc kubenswrapper[4869]: I0130 21:47:42.672847 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c6bc75d49-flzfc" event={"ID":"be261ab9-3f72-4696-8357-f2567b4f20f8","Type":"ContainerStarted","Data":"7accb92361236c9f30115595acd25d649f24534f05d2d1dad1d167cbfd61d417"} Jan 30 21:47:42 crc kubenswrapper[4869]: I0130 21:47:42.673287 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jkgvk" podUID="2f93f920-68d0-41d1-8a20-ca174eda2fcd" containerName="registry-server" containerID="cri-o://f3f0495374d3dab58c2f3ac22b1916ce0b81a9267dda7d9695c37c57880ea45c" gracePeriod=2 Jan 30 21:47:42 crc kubenswrapper[4869]: I0130 21:47:42.703047 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7c6bc75d49-flzfc" podStartSLOduration=5.703028769 podStartE2EDuration="5.703028769s" podCreationTimestamp="2026-01-30 21:47:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:47:42.699340671 +0000 UTC m=+263.585098696" watchObservedRunningTime="2026-01-30 21:47:42.703028769 +0000 UTC m=+263.588786794" Jan 30 21:47:42 crc kubenswrapper[4869]: I0130 21:47:42.940033 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-w5fzr" Jan 30 21:47:42 crc kubenswrapper[4869]: I0130 21:47:42.940197 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-w5fzr" Jan 30 21:47:42 crc kubenswrapper[4869]: I0130 21:47:42.983436 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-w5fzr" Jan 30 21:47:43 crc kubenswrapper[4869]: I0130 21:47:43.124807 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jkgvk" Jan 30 21:47:43 crc kubenswrapper[4869]: I0130 21:47:43.228106 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lldpt\" (UniqueName: \"kubernetes.io/projected/2f93f920-68d0-41d1-8a20-ca174eda2fcd-kube-api-access-lldpt\") pod \"2f93f920-68d0-41d1-8a20-ca174eda2fcd\" (UID: \"2f93f920-68d0-41d1-8a20-ca174eda2fcd\") " Jan 30 21:47:43 crc kubenswrapper[4869]: I0130 21:47:43.228198 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f93f920-68d0-41d1-8a20-ca174eda2fcd-utilities\") pod \"2f93f920-68d0-41d1-8a20-ca174eda2fcd\" (UID: \"2f93f920-68d0-41d1-8a20-ca174eda2fcd\") " Jan 30 21:47:43 crc kubenswrapper[4869]: I0130 21:47:43.228236 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f93f920-68d0-41d1-8a20-ca174eda2fcd-catalog-content\") pod \"2f93f920-68d0-41d1-8a20-ca174eda2fcd\" (UID: \"2f93f920-68d0-41d1-8a20-ca174eda2fcd\") " Jan 30 21:47:43 crc kubenswrapper[4869]: I0130 21:47:43.229524 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f93f920-68d0-41d1-8a20-ca174eda2fcd-utilities" (OuterVolumeSpecName: "utilities") pod "2f93f920-68d0-41d1-8a20-ca174eda2fcd" (UID: "2f93f920-68d0-41d1-8a20-ca174eda2fcd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:47:43 crc kubenswrapper[4869]: I0130 21:47:43.236710 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f93f920-68d0-41d1-8a20-ca174eda2fcd-kube-api-access-lldpt" (OuterVolumeSpecName: "kube-api-access-lldpt") pod "2f93f920-68d0-41d1-8a20-ca174eda2fcd" (UID: "2f93f920-68d0-41d1-8a20-ca174eda2fcd"). InnerVolumeSpecName "kube-api-access-lldpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:47:43 crc kubenswrapper[4869]: I0130 21:47:43.279799 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f93f920-68d0-41d1-8a20-ca174eda2fcd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2f93f920-68d0-41d1-8a20-ca174eda2fcd" (UID: "2f93f920-68d0-41d1-8a20-ca174eda2fcd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:47:43 crc kubenswrapper[4869]: I0130 21:47:43.330237 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lldpt\" (UniqueName: \"kubernetes.io/projected/2f93f920-68d0-41d1-8a20-ca174eda2fcd-kube-api-access-lldpt\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:43 crc kubenswrapper[4869]: I0130 21:47:43.330283 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f93f920-68d0-41d1-8a20-ca174eda2fcd-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:43 crc kubenswrapper[4869]: I0130 21:47:43.330297 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f93f920-68d0-41d1-8a20-ca174eda2fcd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:43 crc kubenswrapper[4869]: I0130 21:47:43.680233 4869 generic.go:334] "Generic (PLEG): container finished" podID="2f93f920-68d0-41d1-8a20-ca174eda2fcd" containerID="f3f0495374d3dab58c2f3ac22b1916ce0b81a9267dda7d9695c37c57880ea45c" exitCode=0 Jan 30 21:47:43 crc kubenswrapper[4869]: I0130 21:47:43.680362 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jkgvk" event={"ID":"2f93f920-68d0-41d1-8a20-ca174eda2fcd","Type":"ContainerDied","Data":"f3f0495374d3dab58c2f3ac22b1916ce0b81a9267dda7d9695c37c57880ea45c"} Jan 30 21:47:43 crc kubenswrapper[4869]: I0130 21:47:43.680394 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jkgvk" event={"ID":"2f93f920-68d0-41d1-8a20-ca174eda2fcd","Type":"ContainerDied","Data":"8d0cc067cb5f2446063d684c8da24bf903124cb4c6084336a0aa5230411234a7"} Jan 30 21:47:43 crc kubenswrapper[4869]: I0130 21:47:43.680419 4869 scope.go:117] "RemoveContainer" containerID="f3f0495374d3dab58c2f3ac22b1916ce0b81a9267dda7d9695c37c57880ea45c" Jan 30 21:47:43 crc kubenswrapper[4869]: I0130 21:47:43.680979 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jkgvk" Jan 30 21:47:43 crc kubenswrapper[4869]: I0130 21:47:43.681810 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7c6bc75d49-flzfc" Jan 30 21:47:43 crc kubenswrapper[4869]: I0130 21:47:43.686300 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7c6bc75d49-flzfc" Jan 30 21:47:43 crc kubenswrapper[4869]: I0130 21:47:43.701234 4869 scope.go:117] "RemoveContainer" containerID="473808dffd4eab7365c1bbce716534ab0ff0b93b5b39c0d278b683a954aa947d" Jan 30 21:47:43 crc kubenswrapper[4869]: I0130 21:47:43.724695 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jkgvk"] Jan 30 21:47:43 crc kubenswrapper[4869]: I0130 21:47:43.726674 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jkgvk"] Jan 30 21:47:43 crc kubenswrapper[4869]: I0130 21:47:43.738603 4869 scope.go:117] "RemoveContainer" containerID="59a37c2e91b00a785cdb839e037e3619dcefa814a42b76761168156aa1a86761" Jan 30 21:47:43 crc kubenswrapper[4869]: I0130 21:47:43.740571 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-w5fzr" Jan 30 21:47:43 crc kubenswrapper[4869]: I0130 21:47:43.765343 4869 scope.go:117] "RemoveContainer" containerID="f3f0495374d3dab58c2f3ac22b1916ce0b81a9267dda7d9695c37c57880ea45c" Jan 30 21:47:43 crc kubenswrapper[4869]: E0130 21:47:43.765849 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3f0495374d3dab58c2f3ac22b1916ce0b81a9267dda7d9695c37c57880ea45c\": container with ID starting with f3f0495374d3dab58c2f3ac22b1916ce0b81a9267dda7d9695c37c57880ea45c not found: ID does not exist" containerID="f3f0495374d3dab58c2f3ac22b1916ce0b81a9267dda7d9695c37c57880ea45c" Jan 30 21:47:43 crc kubenswrapper[4869]: I0130 21:47:43.765884 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3f0495374d3dab58c2f3ac22b1916ce0b81a9267dda7d9695c37c57880ea45c"} err="failed to get container status \"f3f0495374d3dab58c2f3ac22b1916ce0b81a9267dda7d9695c37c57880ea45c\": rpc error: code = NotFound desc = could not find container \"f3f0495374d3dab58c2f3ac22b1916ce0b81a9267dda7d9695c37c57880ea45c\": container with ID starting with f3f0495374d3dab58c2f3ac22b1916ce0b81a9267dda7d9695c37c57880ea45c not found: ID does not exist" Jan 30 21:47:43 crc kubenswrapper[4869]: I0130 21:47:43.765950 4869 scope.go:117] "RemoveContainer" containerID="473808dffd4eab7365c1bbce716534ab0ff0b93b5b39c0d278b683a954aa947d" Jan 30 21:47:43 crc kubenswrapper[4869]: E0130 21:47:43.768375 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"473808dffd4eab7365c1bbce716534ab0ff0b93b5b39c0d278b683a954aa947d\": container with ID starting with 473808dffd4eab7365c1bbce716534ab0ff0b93b5b39c0d278b683a954aa947d not found: ID does not exist" containerID="473808dffd4eab7365c1bbce716534ab0ff0b93b5b39c0d278b683a954aa947d" Jan 30 21:47:43 crc kubenswrapper[4869]: I0130 21:47:43.768481 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"473808dffd4eab7365c1bbce716534ab0ff0b93b5b39c0d278b683a954aa947d"} err="failed to get container status 
\"473808dffd4eab7365c1bbce716534ab0ff0b93b5b39c0d278b683a954aa947d\": rpc error: code = NotFound desc = could not find container \"473808dffd4eab7365c1bbce716534ab0ff0b93b5b39c0d278b683a954aa947d\": container with ID starting with 473808dffd4eab7365c1bbce716534ab0ff0b93b5b39c0d278b683a954aa947d not found: ID does not exist" Jan 30 21:47:43 crc kubenswrapper[4869]: I0130 21:47:43.768585 4869 scope.go:117] "RemoveContainer" containerID="59a37c2e91b00a785cdb839e037e3619dcefa814a42b76761168156aa1a86761" Jan 30 21:47:43 crc kubenswrapper[4869]: E0130 21:47:43.769431 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59a37c2e91b00a785cdb839e037e3619dcefa814a42b76761168156aa1a86761\": container with ID starting with 59a37c2e91b00a785cdb839e037e3619dcefa814a42b76761168156aa1a86761 not found: ID does not exist" containerID="59a37c2e91b00a785cdb839e037e3619dcefa814a42b76761168156aa1a86761" Jan 30 21:47:43 crc kubenswrapper[4869]: I0130 21:47:43.769556 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59a37c2e91b00a785cdb839e037e3619dcefa814a42b76761168156aa1a86761"} err="failed to get container status \"59a37c2e91b00a785cdb839e037e3619dcefa814a42b76761168156aa1a86761\": rpc error: code = NotFound desc = could not find container \"59a37c2e91b00a785cdb839e037e3619dcefa814a42b76761168156aa1a86761\": container with ID starting with 59a37c2e91b00a785cdb839e037e3619dcefa814a42b76761168156aa1a86761 not found: ID does not exist" Jan 30 21:47:43 crc kubenswrapper[4869]: I0130 21:47:43.885686 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f93f920-68d0-41d1-8a20-ca174eda2fcd" path="/var/lib/kubelet/pods/2f93f920-68d0-41d1-8a20-ca174eda2fcd/volumes" Jan 30 21:47:46 crc kubenswrapper[4869]: I0130 21:47:46.224249 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w5fzr"] Jan 30 21:47:46 crc kubenswrapper[4869]: I0130 21:47:46.696141 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-w5fzr" podUID="85baedd6-e513-4741-98cf-ef39cfda8ecb" containerName="registry-server" containerID="cri-o://ea217b3e5b2bb79b36d6a5337f907054956205f7ba5b78eea834e3d9c909f649" gracePeriod=2 Jan 30 21:47:48 crc kubenswrapper[4869]: I0130 21:47:48.715055 4869 generic.go:334] "Generic (PLEG): container finished" podID="85baedd6-e513-4741-98cf-ef39cfda8ecb" containerID="ea217b3e5b2bb79b36d6a5337f907054956205f7ba5b78eea834e3d9c909f649" exitCode=0 Jan 30 21:47:48 crc kubenswrapper[4869]: I0130 21:47:48.715277 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w5fzr" event={"ID":"85baedd6-e513-4741-98cf-ef39cfda8ecb","Type":"ContainerDied","Data":"ea217b3e5b2bb79b36d6a5337f907054956205f7ba5b78eea834e3d9c909f649"} Jan 30 21:47:49 crc kubenswrapper[4869]: I0130 21:47:49.294418 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w5fzr" Jan 30 21:47:49 crc kubenswrapper[4869]: I0130 21:47:49.385564 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6v7j4" Jan 30 21:47:49 crc kubenswrapper[4869]: I0130 21:47:49.445341 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85baedd6-e513-4741-98cf-ef39cfda8ecb-catalog-content\") pod \"85baedd6-e513-4741-98cf-ef39cfda8ecb\" (UID: \"85baedd6-e513-4741-98cf-ef39cfda8ecb\") " Jan 30 21:47:49 crc kubenswrapper[4869]: I0130 21:47:49.445560 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85baedd6-e513-4741-98cf-ef39cfda8ecb-utilities\") pod \"85baedd6-e513-4741-98cf-ef39cfda8ecb\" (UID: \"85baedd6-e513-4741-98cf-ef39cfda8ecb\") " Jan 30 21:47:49 crc kubenswrapper[4869]: I0130 21:47:49.445619 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdk4s\" (UniqueName: \"kubernetes.io/projected/85baedd6-e513-4741-98cf-ef39cfda8ecb-kube-api-access-jdk4s\") pod \"85baedd6-e513-4741-98cf-ef39cfda8ecb\" (UID: \"85baedd6-e513-4741-98cf-ef39cfda8ecb\") " Jan 30 21:47:49 crc kubenswrapper[4869]: I0130 21:47:49.447933 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85baedd6-e513-4741-98cf-ef39cfda8ecb-utilities" (OuterVolumeSpecName: "utilities") pod "85baedd6-e513-4741-98cf-ef39cfda8ecb" (UID: "85baedd6-e513-4741-98cf-ef39cfda8ecb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:47:49 crc kubenswrapper[4869]: I0130 21:47:49.451026 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85baedd6-e513-4741-98cf-ef39cfda8ecb-kube-api-access-jdk4s" (OuterVolumeSpecName: "kube-api-access-jdk4s") pod "85baedd6-e513-4741-98cf-ef39cfda8ecb" (UID: "85baedd6-e513-4741-98cf-ef39cfda8ecb"). InnerVolumeSpecName "kube-api-access-jdk4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:47:49 crc kubenswrapper[4869]: I0130 21:47:49.546929 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85baedd6-e513-4741-98cf-ef39cfda8ecb-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:49 crc kubenswrapper[4869]: I0130 21:47:49.546973 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdk4s\" (UniqueName: \"kubernetes.io/projected/85baedd6-e513-4741-98cf-ef39cfda8ecb-kube-api-access-jdk4s\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:49 crc kubenswrapper[4869]: I0130 21:47:49.722346 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w5fzr" event={"ID":"85baedd6-e513-4741-98cf-ef39cfda8ecb","Type":"ContainerDied","Data":"7383834f164503e0a83b8fe1eb186a4df9ca6c5564360e49952b523d2aab5a0e"} Jan 30 21:47:49 crc kubenswrapper[4869]: I0130 21:47:49.722403 4869 scope.go:117] "RemoveContainer" containerID="ea217b3e5b2bb79b36d6a5337f907054956205f7ba5b78eea834e3d9c909f649" Jan 30 21:47:49 crc kubenswrapper[4869]: I0130 21:47:49.722522 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w5fzr" Jan 30 21:47:49 crc kubenswrapper[4869]: I0130 21:47:49.738306 4869 scope.go:117] "RemoveContainer" containerID="ce48d518b58917d54b5d9e0b357213d43914aecc73717585aed5d040e2aa2847" Jan 30 21:47:49 crc kubenswrapper[4869]: I0130 21:47:49.752988 4869 scope.go:117] "RemoveContainer" containerID="320f1241c0ca54543301c098bac50ae186cd97a4ad74a6748b4c88f66aff5d62" Jan 30 21:47:49 crc kubenswrapper[4869]: I0130 21:47:49.937327 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85baedd6-e513-4741-98cf-ef39cfda8ecb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "85baedd6-e513-4741-98cf-ef39cfda8ecb" (UID: "85baedd6-e513-4741-98cf-ef39cfda8ecb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:47:49 crc kubenswrapper[4869]: I0130 21:47:49.953312 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85baedd6-e513-4741-98cf-ef39cfda8ecb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.060646 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w5fzr"] Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.065433 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-w5fzr"] Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.855010 4869 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 21:47:50 crc kubenswrapper[4869]: E0130 21:47:50.855488 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85baedd6-e513-4741-98cf-ef39cfda8ecb" containerName="extract-content" Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.855500 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="85baedd6-e513-4741-98cf-ef39cfda8ecb" containerName="extract-content" Jan 30 21:47:50 crc kubenswrapper[4869]: E0130 21:47:50.855533 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f93f920-68d0-41d1-8a20-ca174eda2fcd" containerName="extract-content" Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.855542 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f93f920-68d0-41d1-8a20-ca174eda2fcd" containerName="extract-content" Jan 30 21:47:50 crc kubenswrapper[4869]: E0130 21:47:50.855559 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f93f920-68d0-41d1-8a20-ca174eda2fcd" containerName="registry-server" Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.855565 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f93f920-68d0-41d1-8a20-ca174eda2fcd" containerName="registry-server" Jan 30 21:47:50 crc kubenswrapper[4869]: E0130 21:47:50.855575 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85baedd6-e513-4741-98cf-ef39cfda8ecb" containerName="extract-utilities" Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.855583 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="85baedd6-e513-4741-98cf-ef39cfda8ecb" containerName="extract-utilities" Jan 30 21:47:50 crc kubenswrapper[4869]: E0130 21:47:50.855594 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85baedd6-e513-4741-98cf-ef39cfda8ecb" containerName="registry-server" Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.855602 4869 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="85baedd6-e513-4741-98cf-ef39cfda8ecb" containerName="registry-server" Jan 30 21:47:50 crc kubenswrapper[4869]: E0130 21:47:50.855619 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f93f920-68d0-41d1-8a20-ca174eda2fcd" containerName="extract-utilities" Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.855626 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f93f920-68d0-41d1-8a20-ca174eda2fcd" containerName="extract-utilities" Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.855731 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="85baedd6-e513-4741-98cf-ef39cfda8ecb" containerName="registry-server" Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.855752 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f93f920-68d0-41d1-8a20-ca174eda2fcd" containerName="registry-server" Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.856069 4869 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.856336 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.856463 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://bb0a197870fc1ba15cbe9437452dfeac28e7670fe2383e71f5e29276082f7236" gracePeriod=15 Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.856516 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://7f22a7faa65595d272f6cb952ac0054da3d3b6a0017dc004f52c65d823f1da06" gracePeriod=15 Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.856528 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://d12f1427fc164e0f92a0ce90e116d3239ad1c56376666cea3ef92178d0988076" gracePeriod=15 Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.856633 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://9563e1c219434c81376559138e931e8457d653060f639891e4a15b3a61bcf2d5" gracePeriod=15 Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.856678 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://7d705fd79e95205c59ebe341512286e7be25a663cdfe41ec50f0f57582c9f5ce" gracePeriod=15 Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.857006 4869 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 21:47:50 crc kubenswrapper[4869]: E0130 21:47:50.857121 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" 
Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.857134 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 21:47:50 crc kubenswrapper[4869]: E0130 21:47:50.857142 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.857149 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 21:47:50 crc kubenswrapper[4869]: E0130 21:47:50.857158 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.857166 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 30 21:47:50 crc kubenswrapper[4869]: E0130 21:47:50.857175 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.857183 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 21:47:50 crc kubenswrapper[4869]: E0130 21:47:50.857192 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.857197 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 30 21:47:50 crc kubenswrapper[4869]: E0130 21:47:50.857208 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.857214 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 21:47:50 crc kubenswrapper[4869]: E0130 21:47:50.857222 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.857229 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.857328 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.857337 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.857344 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.857353 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 21:47:50 crc kubenswrapper[4869]: 
I0130 21:47:50.857360 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.857369 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 21:47:50 crc kubenswrapper[4869]: E0130 21:47:50.857454 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.857461 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.857555 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.863704 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.863751 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.863785 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.863973 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.864036 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.864070 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.864261 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.864343 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.902706 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.967610 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.967678 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.967708 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.967729 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.967751 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.967775 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.967856 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.967881 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.968011 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.968064 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.968094 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.968123 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.968163 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.968192 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.968223 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:47:50 crc kubenswrapper[4869]: I0130 21:47:50.968252 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
Jan 30 21:47:51 crc kubenswrapper[4869]: I0130 21:47:51.200344 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:47:51 crc kubenswrapper[4869]: E0130 21:47:51.223976 4869 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.129:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188fa084e54d1bb8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 21:47:51.223057336 +0000 UTC m=+272.108815361,LastTimestamp:2026-01-30 21:47:51.223057336 +0000 UTC m=+272.108815361,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 21:47:51 crc kubenswrapper[4869]: E0130 21:47:51.598426 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:47:51Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:47:51Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:47:51Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:47:51Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:9bde862635f230b66b73aad05940f6cf2c0555a47fe1db330a20724acca8d497\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:db103f9b4d410efdd30da231ffebe8f093377e6c1e4064ddc68046925eb4627f\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1680805611},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:63fbea3b7080a0b403eaf16b3fed3ceda4cbba1fb0d71797d201d97e0745475c\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:eecad2fc166355255907130f5b4a16ed876f792fe4420ae700dbc3741c3a382e\\\",\\\"registry.redhat.io/redhat/community-oper
ator-index:v4.18\\\"],\\\"sizeBytes\\\":1202122991},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:84bdfaa1280b6132c66ed59de2078e0bd7672cde009357354bf028b9a1673a95\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:d9b8bab836aa892d91fb35d5c17765fc6fa4b62c78de50c2a7d885c33cc5415d\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1187449074},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:420326d8488ceff2cde22ad8b85d739b0c254d47e703f7ddb1f08f77a48816a6\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:54817da328fa589491a3acbe80acdd88c0830dcc63aaafc08c3539925a1a3b03\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1180692192},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\
\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" Jan 30 21:47:51 crc kubenswrapper[4869]: E0130 21:47:51.599467 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" Jan 30 21:47:51 crc kubenswrapper[4869]: E0130 21:47:51.600077 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" Jan 30 21:47:51 crc kubenswrapper[4869]: E0130 21:47:51.600333 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" Jan 30 21:47:51 crc kubenswrapper[4869]: E0130 21:47:51.600637 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" Jan 30 21:47:51 crc kubenswrapper[4869]: E0130 21:47:51.600672 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 21:47:51 crc kubenswrapper[4869]: I0130 21:47:51.736146 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 30 21:47:51 crc kubenswrapper[4869]: I0130 21:47:51.737300 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 21:47:51 crc kubenswrapper[4869]: I0130 21:47:51.738071 4869 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d12f1427fc164e0f92a0ce90e116d3239ad1c56376666cea3ef92178d0988076" exitCode=0 Jan 30 21:47:51 crc kubenswrapper[4869]: I0130 21:47:51.738102 4869 
generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9563e1c219434c81376559138e931e8457d653060f639891e4a15b3a61bcf2d5" exitCode=0 Jan 30 21:47:51 crc kubenswrapper[4869]: I0130 21:47:51.738115 4869 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7f22a7faa65595d272f6cb952ac0054da3d3b6a0017dc004f52c65d823f1da06" exitCode=0 Jan 30 21:47:51 crc kubenswrapper[4869]: I0130 21:47:51.738125 4869 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7d705fd79e95205c59ebe341512286e7be25a663cdfe41ec50f0f57582c9f5ce" exitCode=2 Jan 30 21:47:51 crc kubenswrapper[4869]: I0130 21:47:51.738162 4869 scope.go:117] "RemoveContainer" containerID="75cfaee59ce28051e9ddfcaf773dda21a6935ca1263eabc330c64bde87c996d3" Jan 30 21:47:51 crc kubenswrapper[4869]: I0130 21:47:51.742879 4869 generic.go:334] "Generic (PLEG): container finished" podID="4efc27c1-fb04-4be9-88b2-25b7657400b7" containerID="d3f014043a3d991391363cd5b6ac8a8ae6e4017676fdb7f06c79c55e96db5b22" exitCode=0 Jan 30 21:47:51 crc kubenswrapper[4869]: I0130 21:47:51.742956 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"4efc27c1-fb04-4be9-88b2-25b7657400b7","Type":"ContainerDied","Data":"d3f014043a3d991391363cd5b6ac8a8ae6e4017676fdb7f06c79c55e96db5b22"} Jan 30 21:47:51 crc kubenswrapper[4869]: I0130 21:47:51.743747 4869 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Jan 30 21:47:51 crc kubenswrapper[4869]: I0130 21:47:51.743878 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"3bb392b8360a7afefd737a17a05005a90586eefb8be447f5c946c6f69792a0cf"} Jan 30 21:47:51 crc kubenswrapper[4869]: I0130 21:47:51.743923 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"23da4d2d93862899278b57845dda5231d1ee0d42795079b8fc143263765bcf64"} Jan 30 21:47:51 crc kubenswrapper[4869]: I0130 21:47:51.744147 4869 status_manager.go:851] "Failed to get status for pod" podUID="4efc27c1-fb04-4be9-88b2-25b7657400b7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Jan 30 21:47:51 crc kubenswrapper[4869]: I0130 21:47:51.744514 4869 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Jan 30 21:47:51 crc kubenswrapper[4869]: I0130 21:47:51.745061 4869 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Jan 30 21:47:51 crc kubenswrapper[4869]: I0130 21:47:51.745310 4869 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Jan 30 21:47:51 crc kubenswrapper[4869]: I0130 21:47:51.745533 4869 status_manager.go:851] "Failed to get status for pod" podUID="4efc27c1-fb04-4be9-88b2-25b7657400b7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Jan 30 21:47:51 crc kubenswrapper[4869]: I0130 21:47:51.885842 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85baedd6-e513-4741-98cf-ef39cfda8ecb" path="/var/lib/kubelet/pods/85baedd6-e513-4741-98cf-ef39cfda8ecb/volumes" Jan 30 21:47:52 crc kubenswrapper[4869]: E0130 21:47:52.406568 4869 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.129:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188fa084e54d1bb8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 21:47:51.223057336 +0000 UTC m=+272.108815361,LastTimestamp:2026-01-30 21:47:51.223057336 +0000 UTC m=+272.108815361,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 21:47:52 crc kubenswrapper[4869]: I0130 21:47:52.759648 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 21:47:53 crc kubenswrapper[4869]: I0130 21:47:53.119103 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 21:47:53 crc kubenswrapper[4869]: I0130 21:47:53.120238 4869 status_manager.go:851] "Failed to get status for pod" podUID="4efc27c1-fb04-4be9-88b2-25b7657400b7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Jan 30 21:47:53 crc kubenswrapper[4869]: I0130 21:47:53.120577 4869 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Jan 30 21:47:53 crc kubenswrapper[4869]: I0130 21:47:53.309146 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4efc27c1-fb04-4be9-88b2-25b7657400b7-kube-api-access\") pod \"4efc27c1-fb04-4be9-88b2-25b7657400b7\" (UID: \"4efc27c1-fb04-4be9-88b2-25b7657400b7\") " Jan 30 21:47:53 crc kubenswrapper[4869]: I0130 21:47:53.309226 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4efc27c1-fb04-4be9-88b2-25b7657400b7-kubelet-dir\") pod \"4efc27c1-fb04-4be9-88b2-25b7657400b7\" (UID: \"4efc27c1-fb04-4be9-88b2-25b7657400b7\") " Jan 30 21:47:53 crc kubenswrapper[4869]: I0130 21:47:53.309273 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4efc27c1-fb04-4be9-88b2-25b7657400b7-var-lock\") pod \"4efc27c1-fb04-4be9-88b2-25b7657400b7\" (UID: \"4efc27c1-fb04-4be9-88b2-25b7657400b7\") " Jan 30 21:47:53 crc kubenswrapper[4869]: I0130 21:47:53.309432 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4efc27c1-fb04-4be9-88b2-25b7657400b7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4efc27c1-fb04-4be9-88b2-25b7657400b7" (UID: "4efc27c1-fb04-4be9-88b2-25b7657400b7"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:47:53 crc kubenswrapper[4869]: I0130 21:47:53.309475 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4efc27c1-fb04-4be9-88b2-25b7657400b7-var-lock" (OuterVolumeSpecName: "var-lock") pod "4efc27c1-fb04-4be9-88b2-25b7657400b7" (UID: "4efc27c1-fb04-4be9-88b2-25b7657400b7"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:47:53 crc kubenswrapper[4869]: I0130 21:47:53.309596 4869 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4efc27c1-fb04-4be9-88b2-25b7657400b7-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:53 crc kubenswrapper[4869]: I0130 21:47:53.309611 4869 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4efc27c1-fb04-4be9-88b2-25b7657400b7-var-lock\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:53 crc kubenswrapper[4869]: I0130 21:47:53.315912 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4efc27c1-fb04-4be9-88b2-25b7657400b7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4efc27c1-fb04-4be9-88b2-25b7657400b7" (UID: "4efc27c1-fb04-4be9-88b2-25b7657400b7"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:47:53 crc kubenswrapper[4869]: I0130 21:47:53.410501 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4efc27c1-fb04-4be9-88b2-25b7657400b7-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:53 crc kubenswrapper[4869]: I0130 21:47:53.767385 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"4efc27c1-fb04-4be9-88b2-25b7657400b7","Type":"ContainerDied","Data":"4eecf7eb539d0e415c9e6ae80b0c01ea992e59fcb6673cd772da1276aced799a"} Jan 30 21:47:53 crc kubenswrapper[4869]: I0130 21:47:53.767435 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4eecf7eb539d0e415c9e6ae80b0c01ea992e59fcb6673cd772da1276aced799a" Jan 30 21:47:53 crc kubenswrapper[4869]: I0130 21:47:53.767690 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 21:47:53 crc kubenswrapper[4869]: I0130 21:47:53.771016 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 21:47:53 crc kubenswrapper[4869]: I0130 21:47:53.771748 4869 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bb0a197870fc1ba15cbe9437452dfeac28e7670fe2383e71f5e29276082f7236" exitCode=0 Jan 30 21:47:53 crc kubenswrapper[4869]: I0130 21:47:53.783307 4869 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Jan 30 21:47:53 crc kubenswrapper[4869]: I0130 21:47:53.784108 4869 status_manager.go:851] "Failed to get status for pod" podUID="4efc27c1-fb04-4be9-88b2-25b7657400b7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Jan 30 21:47:54 crc kubenswrapper[4869]: I0130 21:47:54.013637 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 21:47:54 crc kubenswrapper[4869]: I0130 21:47:54.018678 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:47:54 crc kubenswrapper[4869]: I0130 21:47:54.019402 4869 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Jan 30 21:47:54 crc kubenswrapper[4869]: I0130 21:47:54.019690 4869 status_manager.go:851] "Failed to get status for pod" podUID="4efc27c1-fb04-4be9-88b2-25b7657400b7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Jan 30 21:47:54 crc kubenswrapper[4869]: I0130 21:47:54.020072 4869 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Jan 30 21:47:54 crc kubenswrapper[4869]: I0130 21:47:54.121101 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 30 21:47:54 crc kubenswrapper[4869]: I0130 21:47:54.121294 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: 
"audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:47:54 crc kubenswrapper[4869]: I0130 21:47:54.121759 4869 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:54 crc kubenswrapper[4869]: I0130 21:47:54.222324 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 30 21:47:54 crc kubenswrapper[4869]: I0130 21:47:54.222465 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:47:54 crc kubenswrapper[4869]: I0130 21:47:54.222919 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:47:54 crc kubenswrapper[4869]: I0130 21:47:54.222863 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 30 21:47:54 crc kubenswrapper[4869]: I0130 21:47:54.223418 4869 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:54 crc kubenswrapper[4869]: I0130 21:47:54.223436 4869 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:54 crc kubenswrapper[4869]: I0130 21:47:54.777637 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 21:47:54 crc kubenswrapper[4869]: I0130 21:47:54.778272 4869 scope.go:117] "RemoveContainer" containerID="d12f1427fc164e0f92a0ce90e116d3239ad1c56376666cea3ef92178d0988076" Jan 30 21:47:54 crc kubenswrapper[4869]: I0130 21:47:54.778396 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:47:54 crc kubenswrapper[4869]: I0130 21:47:54.792516 4869 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Jan 30 21:47:54 crc kubenswrapper[4869]: I0130 21:47:54.792875 4869 status_manager.go:851] "Failed to get status for pod" podUID="4efc27c1-fb04-4be9-88b2-25b7657400b7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Jan 30 21:47:54 crc kubenswrapper[4869]: I0130 21:47:54.793194 4869 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Jan 30 21:47:54 crc kubenswrapper[4869]: I0130 21:47:54.797902 4869 scope.go:117] "RemoveContainer" containerID="9563e1c219434c81376559138e931e8457d653060f639891e4a15b3a61bcf2d5" Jan 30 21:47:54 crc kubenswrapper[4869]: I0130 21:47:54.810756 4869 scope.go:117] "RemoveContainer" containerID="7f22a7faa65595d272f6cb952ac0054da3d3b6a0017dc004f52c65d823f1da06" Jan 30 21:47:54 crc kubenswrapper[4869]: I0130 21:47:54.825017 4869 scope.go:117] "RemoveContainer" containerID="7d705fd79e95205c59ebe341512286e7be25a663cdfe41ec50f0f57582c9f5ce" Jan 30 21:47:54 crc kubenswrapper[4869]: I0130 21:47:54.839663 4869 scope.go:117] "RemoveContainer" containerID="bb0a197870fc1ba15cbe9437452dfeac28e7670fe2383e71f5e29276082f7236" Jan 30 21:47:54 crc kubenswrapper[4869]: I0130 21:47:54.854261 4869 scope.go:117] "RemoveContainer" containerID="5ce7e452353fdf5154ceafdf874c9d879b37cb693cb3eb2c5be91a7d8a79f278" Jan 30 21:47:55 crc kubenswrapper[4869]: I0130 21:47:55.884784 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 30 21:47:59 crc kubenswrapper[4869]: I0130 21:47:59.881655 4869 status_manager.go:851] "Failed to get status for pod" podUID="4efc27c1-fb04-4be9-88b2-25b7657400b7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Jan 30 21:47:59 crc kubenswrapper[4869]: I0130 21:47:59.882343 4869 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Jan 30 21:48:00 crc kubenswrapper[4869]: E0130 21:48:00.513795 4869 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" Jan 30 21:48:00 crc kubenswrapper[4869]: E0130 
21:48:00.514351 4869 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" Jan 30 21:48:00 crc kubenswrapper[4869]: E0130 21:48:00.514781 4869 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" Jan 30 21:48:00 crc kubenswrapper[4869]: E0130 21:48:00.515073 4869 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" Jan 30 21:48:00 crc kubenswrapper[4869]: E0130 21:48:00.515340 4869 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" Jan 30 21:48:00 crc kubenswrapper[4869]: I0130 21:48:00.515368 4869 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 30 21:48:00 crc kubenswrapper[4869]: E0130 21:48:00.515639 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" interval="200ms" Jan 30 21:48:00 crc kubenswrapper[4869]: E0130 21:48:00.717240 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" interval="400ms" Jan 30 21:48:01 crc kubenswrapper[4869]: E0130 21:48:01.117982 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" interval="800ms" Jan 30 21:48:01 crc kubenswrapper[4869]: E0130 21:48:01.919833 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" interval="1.6s" Jan 30 21:48:01 crc kubenswrapper[4869]: E0130 21:48:01.999710 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:48:01Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:48:01Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:48:01Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:48:01Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:9bde862635f230b66b73aad05940f6cf2c0555a47fe1db330a20724acca8d497\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:db103f9b4d410efdd30da231ffebe8f093377e6c1e4064ddc68046925eb4627f\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1680805611},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:63fbea3b7080a0b403eaf16b3fed3ceda4cbba1fb0d71797d201d97e0745475c\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:eecad2fc166355255907130f5b4a16ed876f792fe4420ae700dbc3741c3a382e\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1202122991},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:84bdfaa1280b6132c66ed59de2078e0bd7672cde009357354bf028b9a1673a95\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:d9b8bab836aa892d91fb35d5c17765fc6fa4b62c78de50c2a7d885c33cc5415d\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1187449074},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:420326d8488ceff2cde22ad8b85d739b0c254d47e703f7ddb1f08f77a48816a6\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:54817da328fa589491a3acbe80acdd88c0830dcc63aaafc08c3539925a1a3b03\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1180692192},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\
\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" Jan 30 21:48:02 crc kubenswrapper[4869]: E0130 21:48:02.000351 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get 
\"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" Jan 30 21:48:02 crc kubenswrapper[4869]: E0130 21:48:02.000595 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" Jan 30 21:48:02 crc kubenswrapper[4869]: E0130 21:48:02.000925 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" Jan 30 21:48:02 crc kubenswrapper[4869]: E0130 21:48:02.001210 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" Jan 30 21:48:02 crc kubenswrapper[4869]: E0130 21:48:02.001235 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 21:48:02 crc kubenswrapper[4869]: E0130 21:48:02.408348 4869 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.129:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188fa084e54d1bb8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 21:47:51.223057336 +0000 UTC m=+272.108815361,LastTimestamp:2026-01-30 21:47:51.223057336 +0000 UTC m=+272.108815361,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 21:48:03 crc kubenswrapper[4869]: E0130 21:48:03.521326 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" interval="3.2s" Jan 30 21:48:03 crc kubenswrapper[4869]: I0130 21:48:03.877064 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:48:03 crc kubenswrapper[4869]: I0130 21:48:03.877983 4869 status_manager.go:851] "Failed to get status for pod" podUID="4efc27c1-fb04-4be9-88b2-25b7657400b7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Jan 30 21:48:03 crc kubenswrapper[4869]: I0130 21:48:03.878303 4869 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Jan 30 21:48:03 crc kubenswrapper[4869]: I0130 21:48:03.891090 4869 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6a48f728-068e-4f6e-9794-12d375245df1" Jan 30 21:48:03 crc kubenswrapper[4869]: I0130 21:48:03.891132 4869 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6a48f728-068e-4f6e-9794-12d375245df1" Jan 30 21:48:03 crc kubenswrapper[4869]: E0130 21:48:03.891574 4869 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:48:03 crc kubenswrapper[4869]: I0130 21:48:03.892035 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:48:03 crc kubenswrapper[4869]: W0130 21:48:03.908145 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-69d0c16aced10e9b8da2e3c17ee63a2ca1d3679da14cd9d783a2544cf0386366 WatchSource:0}: Error finding container 69d0c16aced10e9b8da2e3c17ee63a2ca1d3679da14cd9d783a2544cf0386366: Status 404 returned error can't find the container with id 69d0c16aced10e9b8da2e3c17ee63a2ca1d3679da14cd9d783a2544cf0386366 Jan 30 21:48:04 crc kubenswrapper[4869]: I0130 21:48:04.852522 4869 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="c0e73110a22e616c4367ee3f092f0ebacf2bbfbc73d77ef13978fa8171a8e7bf" exitCode=0 Jan 30 21:48:04 crc kubenswrapper[4869]: I0130 21:48:04.852614 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"c0e73110a22e616c4367ee3f092f0ebacf2bbfbc73d77ef13978fa8171a8e7bf"} Jan 30 21:48:04 crc kubenswrapper[4869]: I0130 21:48:04.852715 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"69d0c16aced10e9b8da2e3c17ee63a2ca1d3679da14cd9d783a2544cf0386366"} Jan 30 21:48:04 crc kubenswrapper[4869]: I0130 21:48:04.853008 4869 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6a48f728-068e-4f6e-9794-12d375245df1" Jan 30 21:48:04 crc kubenswrapper[4869]: I0130 21:48:04.853021 4869 mirror_client.go:130] "Deleting a mirror 
pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6a48f728-068e-4f6e-9794-12d375245df1" Jan 30 21:48:04 crc kubenswrapper[4869]: E0130 21:48:04.853423 4869 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:48:04 crc kubenswrapper[4869]: I0130 21:48:04.853422 4869 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Jan 30 21:48:04 crc kubenswrapper[4869]: I0130 21:48:04.853963 4869 status_manager.go:851] "Failed to get status for pod" podUID="4efc27c1-fb04-4be9-88b2-25b7657400b7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Jan 30 21:48:05 crc kubenswrapper[4869]: I0130 21:48:05.862451 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 30 21:48:05 crc kubenswrapper[4869]: I0130 21:48:05.862515 4869 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="f0c8909f295c8aea1aa2d6da1a709d1692b8fbdd6bb389d339d644655788328b" exitCode=1 Jan 30 21:48:05 crc kubenswrapper[4869]: I0130 21:48:05.862592 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"f0c8909f295c8aea1aa2d6da1a709d1692b8fbdd6bb389d339d644655788328b"} Jan 30 21:48:05 crc kubenswrapper[4869]: I0130 21:48:05.863508 4869 scope.go:117] "RemoveContainer" containerID="f0c8909f295c8aea1aa2d6da1a709d1692b8fbdd6bb389d339d644655788328b" Jan 30 21:48:05 crc kubenswrapper[4869]: I0130 21:48:05.866412 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0508197f783723f5a70935ffd9c4d82580ff6f37dc20f423dffdf65e34d71e62"} Jan 30 21:48:06 crc kubenswrapper[4869]: I0130 21:48:06.872742 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"46cde34c6e2d7465b495ff97f42bccaf8e38ede634a734b57cf52b10589c53de"} Jan 30 21:48:06 crc kubenswrapper[4869]: I0130 21:48:06.875365 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 30 21:48:06 crc kubenswrapper[4869]: I0130 21:48:06.875416 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"de7593037726642732bd54c16367b8b01cda801f190f028e88b0b31b8cd5ff3d"} Jan 30 21:48:07 crc kubenswrapper[4869]: I0130 21:48:07.900045 4869 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8ddecc3e4bfab0d4bbc610b36e77c227b9874d15740cccdb181a169a3375b5ff"} Jan 30 21:48:08 crc kubenswrapper[4869]: I0130 21:48:08.910200 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7cf3b7c2713f05e2ed7341dac60d14cc61a697beabece21ca9a9a14a345b766c"} Jan 30 21:48:08 crc kubenswrapper[4869]: I0130 21:48:08.910242 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a8032085379814088593f367843c4bbfb4849e59b36e00f35d92dbb375334aa5"} Jan 30 21:48:08 crc kubenswrapper[4869]: I0130 21:48:08.910406 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:48:08 crc kubenswrapper[4869]: I0130 21:48:08.910480 4869 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6a48f728-068e-4f6e-9794-12d375245df1" Jan 30 21:48:08 crc kubenswrapper[4869]: I0130 21:48:08.910503 4869 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6a48f728-068e-4f6e-9794-12d375245df1" Jan 30 21:48:08 crc kubenswrapper[4869]: I0130 21:48:08.918511 4869 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:48:09 crc kubenswrapper[4869]: I0130 21:48:09.796977 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:48:09 crc kubenswrapper[4869]: I0130 21:48:09.801373 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:48:09 crc kubenswrapper[4869]: I0130 21:48:09.914767 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:48:09 crc kubenswrapper[4869]: I0130 21:48:09.915090 4869 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6a48f728-068e-4f6e-9794-12d375245df1" Jan 30 21:48:09 crc kubenswrapper[4869]: I0130 21:48:09.915479 4869 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6a48f728-068e-4f6e-9794-12d375245df1" Jan 30 21:48:11 crc kubenswrapper[4869]: I0130 21:48:11.815020 4869 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="7219e18c-194f-4f79-aa89-8fb8c4632407" Jan 30 21:48:19 crc kubenswrapper[4869]: I0130 21:48:19.667121 4869 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 30 21:48:21 crc kubenswrapper[4869]: I0130 21:48:21.914982 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 30 21:48:22 crc kubenswrapper[4869]: I0130 21:48:22.299258 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 30 21:48:22 crc 
kubenswrapper[4869]: I0130 21:48:22.557163 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 30 21:48:22 crc kubenswrapper[4869]: I0130 21:48:22.830823 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:48:22 crc kubenswrapper[4869]: I0130 21:48:22.867843 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 30 21:48:23 crc kubenswrapper[4869]: I0130 21:48:23.159480 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 30 21:48:23 crc kubenswrapper[4869]: I0130 21:48:23.333466 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 30 21:48:23 crc kubenswrapper[4869]: I0130 21:48:23.424211 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 30 21:48:23 crc kubenswrapper[4869]: I0130 21:48:23.552432 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 21:48:23 crc kubenswrapper[4869]: I0130 21:48:23.697581 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 21:48:24 crc kubenswrapper[4869]: I0130 21:48:24.033196 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 30 21:48:24 crc kubenswrapper[4869]: I0130 21:48:24.125991 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 30 21:48:24 crc kubenswrapper[4869]: I0130 21:48:24.356395 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 30 21:48:24 crc kubenswrapper[4869]: I0130 21:48:24.380039 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 30 21:48:24 crc kubenswrapper[4869]: I0130 21:48:24.412308 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 30 21:48:24 crc kubenswrapper[4869]: I0130 21:48:24.540648 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 30 21:48:24 crc kubenswrapper[4869]: I0130 21:48:24.543002 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 30 21:48:24 crc kubenswrapper[4869]: I0130 21:48:24.588086 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 30 21:48:24 crc kubenswrapper[4869]: I0130 21:48:24.631553 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 30 21:48:24 crc kubenswrapper[4869]: I0130 21:48:24.656487 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 30 21:48:24 crc kubenswrapper[4869]: I0130 21:48:24.994668 4869 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 30 21:48:25 crc kubenswrapper[4869]: I0130 21:48:25.045845 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 30 21:48:25 crc kubenswrapper[4869]: I0130 21:48:25.275021 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 30 21:48:25 crc kubenswrapper[4869]: I0130 21:48:25.280813 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 30 21:48:25 crc kubenswrapper[4869]: I0130 21:48:25.314450 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 21:48:25 crc kubenswrapper[4869]: I0130 21:48:25.361702 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 30 21:48:25 crc kubenswrapper[4869]: I0130 21:48:25.367726 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 30 21:48:25 crc kubenswrapper[4869]: I0130 21:48:25.385039 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 30 21:48:25 crc kubenswrapper[4869]: I0130 21:48:25.529958 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 30 21:48:25 crc kubenswrapper[4869]: I0130 21:48:25.566959 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 30 21:48:25 crc kubenswrapper[4869]: I0130 21:48:25.648666 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 30 21:48:26 crc kubenswrapper[4869]: I0130 21:48:26.039012 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 30 21:48:26 crc kubenswrapper[4869]: I0130 21:48:26.100282 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 30 21:48:26 crc kubenswrapper[4869]: I0130 21:48:26.131211 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 30 21:48:26 crc kubenswrapper[4869]: I0130 21:48:26.162448 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 30 21:48:26 crc kubenswrapper[4869]: I0130 21:48:26.363237 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 30 21:48:26 crc kubenswrapper[4869]: I0130 21:48:26.489450 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 30 21:48:26 crc kubenswrapper[4869]: I0130 21:48:26.532601 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 30 21:48:26 crc kubenswrapper[4869]: I0130 21:48:26.581010 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 30 21:48:26 crc kubenswrapper[4869]: I0130 21:48:26.582525 4869 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 21:48:26 crc kubenswrapper[4869]: I0130 21:48:26.608998 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 30 21:48:26 crc kubenswrapper[4869]: I0130 21:48:26.780010 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 30 21:48:26 crc kubenswrapper[4869]: I0130 21:48:26.781397 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 30 21:48:26 crc kubenswrapper[4869]: I0130 21:48:26.845221 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 30 21:48:26 crc kubenswrapper[4869]: I0130 21:48:26.941591 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 30 21:48:26 crc kubenswrapper[4869]: I0130 21:48:26.947147 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 30 21:48:26 crc kubenswrapper[4869]: I0130 21:48:26.979607 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 30 21:48:27 crc kubenswrapper[4869]: I0130 21:48:27.003496 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 30 21:48:27 crc kubenswrapper[4869]: I0130 21:48:27.025840 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 30 21:48:27 crc kubenswrapper[4869]: I0130 21:48:27.077053 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 30 21:48:27 crc kubenswrapper[4869]: I0130 21:48:27.091579 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 30 21:48:27 crc kubenswrapper[4869]: I0130 21:48:27.132267 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 30 21:48:27 crc kubenswrapper[4869]: I0130 21:48:27.132527 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 30 21:48:27 crc kubenswrapper[4869]: I0130 21:48:27.150303 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 30 21:48:27 crc kubenswrapper[4869]: I0130 21:48:27.209669 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 21:48:27 crc kubenswrapper[4869]: I0130 21:48:27.219984 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 30 21:48:27 crc kubenswrapper[4869]: I0130 21:48:27.408767 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 30 21:48:27 crc kubenswrapper[4869]: I0130 21:48:27.455185 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 30 21:48:27 crc kubenswrapper[4869]: I0130 21:48:27.469835 4869 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 30 21:48:27 crc kubenswrapper[4869]: I0130 21:48:27.512110 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 30 21:48:27 crc kubenswrapper[4869]: I0130 21:48:27.514615 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 30 21:48:27 crc kubenswrapper[4869]: I0130 21:48:27.516980 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 30 21:48:27 crc kubenswrapper[4869]: I0130 21:48:27.537417 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 30 21:48:27 crc kubenswrapper[4869]: I0130 21:48:27.617840 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 30 21:48:27 crc kubenswrapper[4869]: I0130 21:48:27.620414 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 30 21:48:27 crc kubenswrapper[4869]: I0130 21:48:27.694109 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 30 21:48:27 crc kubenswrapper[4869]: I0130 21:48:27.697194 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 30 21:48:27 crc kubenswrapper[4869]: I0130 21:48:27.710451 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 21:48:27 crc kubenswrapper[4869]: I0130 21:48:27.760063 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 30 21:48:27 crc kubenswrapper[4869]: I0130 21:48:27.771654 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 21:48:27 crc kubenswrapper[4869]: I0130 21:48:27.776432 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 30 21:48:27 crc kubenswrapper[4869]: I0130 21:48:27.882086 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 30 21:48:27 crc kubenswrapper[4869]: I0130 21:48:27.946258 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 30 21:48:27 crc kubenswrapper[4869]: I0130 21:48:27.960987 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 30 21:48:27 crc kubenswrapper[4869]: I0130 21:48:27.991932 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 30 21:48:27 crc kubenswrapper[4869]: I0130 21:48:27.997774 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 30 21:48:28 crc kubenswrapper[4869]: I0130 21:48:28.038930 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 30 21:48:28 crc kubenswrapper[4869]: I0130 21:48:28.042990 4869 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 30 21:48:28 crc kubenswrapper[4869]: I0130 21:48:28.056605 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 30 21:48:28 crc kubenswrapper[4869]: I0130 21:48:28.071863 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 30 21:48:28 crc kubenswrapper[4869]: I0130 21:48:28.098337 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 30 21:48:28 crc kubenswrapper[4869]: I0130 21:48:28.169595 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 30 21:48:28 crc kubenswrapper[4869]: I0130 21:48:28.238459 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 30 21:48:28 crc kubenswrapper[4869]: I0130 21:48:28.256923 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 30 21:48:28 crc kubenswrapper[4869]: I0130 21:48:28.256987 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 30 21:48:28 crc kubenswrapper[4869]: I0130 21:48:28.344971 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 30 21:48:28 crc kubenswrapper[4869]: I0130 21:48:28.375009 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 30 21:48:28 crc kubenswrapper[4869]: I0130 21:48:28.415575 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 30 21:48:28 crc kubenswrapper[4869]: I0130 21:48:28.500170 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 30 21:48:28 crc kubenswrapper[4869]: I0130 21:48:28.674150 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 30 21:48:28 crc kubenswrapper[4869]: I0130 21:48:28.735628 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 30 21:48:28 crc kubenswrapper[4869]: I0130 21:48:28.779491 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 30 21:48:28 crc kubenswrapper[4869]: I0130 21:48:28.929286 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 30 21:48:28 crc kubenswrapper[4869]: I0130 21:48:28.936878 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 30 21:48:28 crc kubenswrapper[4869]: I0130 21:48:28.955990 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 30 21:48:29 crc kubenswrapper[4869]: I0130 21:48:29.004633 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 30 21:48:29 crc kubenswrapper[4869]: I0130 21:48:29.212547 4869 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 30 21:48:29 crc kubenswrapper[4869]: I0130 21:48:29.275728 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 30 21:48:29 crc kubenswrapper[4869]: I0130 21:48:29.365631 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 30 21:48:29 crc kubenswrapper[4869]: I0130 21:48:29.412170 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 30 21:48:29 crc kubenswrapper[4869]: I0130 21:48:29.428096 4869 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 30 21:48:29 crc kubenswrapper[4869]: I0130 21:48:29.468116 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 30 21:48:29 crc kubenswrapper[4869]: I0130 21:48:29.485346 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 30 21:48:29 crc kubenswrapper[4869]: I0130 21:48:29.571098 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 30 21:48:29 crc kubenswrapper[4869]: I0130 21:48:29.614485 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 30 21:48:29 crc kubenswrapper[4869]: I0130 21:48:29.624186 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 30 21:48:29 crc kubenswrapper[4869]: I0130 21:48:29.733655 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 30 21:48:29 crc kubenswrapper[4869]: I0130 21:48:29.757529 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 30 21:48:29 crc kubenswrapper[4869]: I0130 21:48:29.761740 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 30 21:48:29 crc kubenswrapper[4869]: I0130 21:48:29.889613 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 30 21:48:29 crc kubenswrapper[4869]: I0130 21:48:29.965611 4869 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 30 21:48:30 crc kubenswrapper[4869]: I0130 21:48:30.051312 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 30 21:48:30 crc kubenswrapper[4869]: I0130 21:48:30.115783 4869 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 30 21:48:30 crc kubenswrapper[4869]: I0130 21:48:30.190137 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 30 21:48:30 crc kubenswrapper[4869]: I0130 21:48:30.299179 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 30 21:48:30 crc kubenswrapper[4869]: I0130 21:48:30.326179 4869 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 30 21:48:30 crc kubenswrapper[4869]: I0130 21:48:30.370208 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 30 21:48:30 crc kubenswrapper[4869]: I0130 21:48:30.400169 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 30 21:48:30 crc kubenswrapper[4869]: I0130 21:48:30.512513 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 30 21:48:30 crc kubenswrapper[4869]: I0130 21:48:30.586394 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 30 21:48:30 crc kubenswrapper[4869]: I0130 21:48:30.607539 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 30 21:48:30 crc kubenswrapper[4869]: I0130 21:48:30.643774 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 30 21:48:30 crc kubenswrapper[4869]: I0130 21:48:30.808667 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 30 21:48:30 crc kubenswrapper[4869]: I0130 21:48:30.824229 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 30 21:48:30 crc kubenswrapper[4869]: I0130 21:48:30.864838 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 30 21:48:31 crc kubenswrapper[4869]: I0130 21:48:31.109782 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 30 21:48:31 crc kubenswrapper[4869]: I0130 21:48:31.114614 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 30 21:48:31 crc kubenswrapper[4869]: I0130 21:48:31.123243 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 21:48:31 crc kubenswrapper[4869]: I0130 21:48:31.168622 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 30 21:48:31 crc kubenswrapper[4869]: I0130 21:48:31.255875 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 30 21:48:31 crc kubenswrapper[4869]: I0130 21:48:31.258456 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 30 21:48:31 crc kubenswrapper[4869]: I0130 21:48:31.370830 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 30 21:48:31 crc kubenswrapper[4869]: I0130 21:48:31.389201 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 30 21:48:31 crc kubenswrapper[4869]: I0130 21:48:31.411240 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 30 21:48:31 crc kubenswrapper[4869]: I0130 21:48:31.481595 4869 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 30 21:48:31 crc kubenswrapper[4869]: I0130 21:48:31.527469 4869 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 30 21:48:31 crc kubenswrapper[4869]: I0130 21:48:31.531407 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=41.531389636 podStartE2EDuration="41.531389636s" podCreationTimestamp="2026-01-30 21:47:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:48:11.77243991 +0000 UTC m=+292.658197945" watchObservedRunningTime="2026-01-30 21:48:31.531389636 +0000 UTC m=+312.417147661" Jan 30 21:48:31 crc kubenswrapper[4869]: I0130 21:48:31.531879 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 21:48:31 crc kubenswrapper[4869]: I0130 21:48:31.531944 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 21:48:31 crc kubenswrapper[4869]: I0130 21:48:31.536198 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:48:31 crc kubenswrapper[4869]: I0130 21:48:31.548101 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=23.548085381 podStartE2EDuration="23.548085381s" podCreationTimestamp="2026-01-30 21:48:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:48:31.547263216 +0000 UTC m=+312.433021241" watchObservedRunningTime="2026-01-30 21:48:31.548085381 +0000 UTC m=+312.433843406" Jan 30 21:48:31 crc kubenswrapper[4869]: I0130 21:48:31.584690 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 21:48:31 crc kubenswrapper[4869]: I0130 21:48:31.593673 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 30 21:48:31 crc kubenswrapper[4869]: I0130 21:48:31.668554 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 30 21:48:31 crc kubenswrapper[4869]: I0130 21:48:31.712100 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 30 21:48:31 crc kubenswrapper[4869]: I0130 21:48:31.755140 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 30 21:48:31 crc kubenswrapper[4869]: I0130 21:48:31.784912 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 30 21:48:31 crc kubenswrapper[4869]: I0130 21:48:31.798284 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 30 21:48:31 crc kubenswrapper[4869]: I0130 21:48:31.814315 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 30 21:48:31 crc kubenswrapper[4869]: I0130 21:48:31.831965 4869 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 30 21:48:31 crc kubenswrapper[4869]: I0130 21:48:31.845094 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 30 21:48:31 crc kubenswrapper[4869]: I0130 21:48:31.855481 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 30 21:48:32 crc kubenswrapper[4869]: I0130 21:48:32.127316 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 30 21:48:32 crc kubenswrapper[4869]: I0130 21:48:32.160602 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 30 21:48:32 crc kubenswrapper[4869]: I0130 21:48:32.185740 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 30 21:48:32 crc kubenswrapper[4869]: I0130 21:48:32.247090 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 21:48:32 crc kubenswrapper[4869]: I0130 21:48:32.249951 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 30 21:48:32 crc kubenswrapper[4869]: I0130 21:48:32.305888 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 30 21:48:32 crc kubenswrapper[4869]: I0130 21:48:32.468330 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 30 21:48:32 crc kubenswrapper[4869]: I0130 21:48:32.509798 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 30 21:48:32 crc kubenswrapper[4869]: I0130 21:48:32.522817 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 30 21:48:32 crc kubenswrapper[4869]: I0130 21:48:32.599113 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 30 21:48:32 crc kubenswrapper[4869]: I0130 21:48:32.706179 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 30 21:48:32 crc kubenswrapper[4869]: I0130 21:48:32.711020 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 30 21:48:32 crc kubenswrapper[4869]: I0130 21:48:32.830118 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 21:48:32 crc kubenswrapper[4869]: I0130 21:48:32.910273 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 30 21:48:32 crc kubenswrapper[4869]: I0130 21:48:32.962926 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 30 21:48:32 crc kubenswrapper[4869]: I0130 21:48:32.997776 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 30 21:48:33 crc kubenswrapper[4869]: I0130 21:48:33.056869 4869 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 30 21:48:33 crc kubenswrapper[4869]: I0130 21:48:33.190201 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 30 21:48:33 crc kubenswrapper[4869]: I0130 21:48:33.194462 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 21:48:33 crc kubenswrapper[4869]: I0130 21:48:33.243112 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 30 21:48:33 crc kubenswrapper[4869]: I0130 21:48:33.243944 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 30 21:48:33 crc kubenswrapper[4869]: I0130 21:48:33.262213 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 21:48:33 crc kubenswrapper[4869]: I0130 21:48:33.311325 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 30 21:48:33 crc kubenswrapper[4869]: I0130 21:48:33.375972 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 30 21:48:33 crc kubenswrapper[4869]: I0130 21:48:33.427759 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 30 21:48:33 crc kubenswrapper[4869]: I0130 21:48:33.463625 4869 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 30 21:48:33 crc kubenswrapper[4869]: I0130 21:48:33.493267 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 30 21:48:33 crc kubenswrapper[4869]: I0130 21:48:33.505710 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 30 21:48:33 crc kubenswrapper[4869]: I0130 21:48:33.512403 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 30 21:48:33 crc kubenswrapper[4869]: I0130 21:48:33.629125 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 30 21:48:33 crc kubenswrapper[4869]: I0130 21:48:33.774992 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 30 21:48:33 crc kubenswrapper[4869]: I0130 21:48:33.797936 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 30 21:48:33 crc kubenswrapper[4869]: I0130 21:48:33.844751 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 30 21:48:33 crc kubenswrapper[4869]: I0130 21:48:33.892352 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:48:33 crc kubenswrapper[4869]: I0130 21:48:33.892538 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:48:33 crc kubenswrapper[4869]: I0130 21:48:33.896119 4869 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:48:34 crc kubenswrapper[4869]: I0130 21:48:34.007676 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 30 21:48:34 crc kubenswrapper[4869]: I0130 21:48:34.043165 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:48:34 crc kubenswrapper[4869]: I0130 21:48:34.111005 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 30 21:48:34 crc kubenswrapper[4869]: I0130 21:48:34.168706 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 30 21:48:34 crc kubenswrapper[4869]: I0130 21:48:34.210200 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 30 21:48:34 crc kubenswrapper[4869]: I0130 21:48:34.255906 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 30 21:48:34 crc kubenswrapper[4869]: I0130 21:48:34.315398 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 30 21:48:34 crc kubenswrapper[4869]: I0130 21:48:34.355771 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 30 21:48:34 crc kubenswrapper[4869]: I0130 21:48:34.356377 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 30 21:48:34 crc kubenswrapper[4869]: I0130 21:48:34.374055 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 30 21:48:34 crc kubenswrapper[4869]: I0130 21:48:34.388748 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 30 21:48:34 crc kubenswrapper[4869]: I0130 21:48:34.463918 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 30 21:48:34 crc kubenswrapper[4869]: I0130 21:48:34.498860 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 30 21:48:34 crc kubenswrapper[4869]: I0130 21:48:34.503088 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 30 21:48:34 crc kubenswrapper[4869]: I0130 21:48:34.505638 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 30 21:48:34 crc kubenswrapper[4869]: I0130 21:48:34.640830 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 30 21:48:34 crc kubenswrapper[4869]: I0130 21:48:34.774272 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 30 21:48:34 crc kubenswrapper[4869]: I0130 21:48:34.777046 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 30 21:48:34 crc kubenswrapper[4869]: I0130 21:48:34.790647 
4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 30 21:48:34 crc kubenswrapper[4869]: I0130 21:48:34.817741 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 30 21:48:34 crc kubenswrapper[4869]: I0130 21:48:34.863764 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 30 21:48:34 crc kubenswrapper[4869]: I0130 21:48:34.874869 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 30 21:48:34 crc kubenswrapper[4869]: I0130 21:48:34.880497 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 30 21:48:34 crc kubenswrapper[4869]: I0130 21:48:34.902549 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 30 21:48:34 crc kubenswrapper[4869]: I0130 21:48:34.973515 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 30 21:48:34 crc kubenswrapper[4869]: I0130 21:48:34.993416 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 30 21:48:35 crc kubenswrapper[4869]: I0130 21:48:35.026609 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 30 21:48:35 crc kubenswrapper[4869]: I0130 21:48:35.060214 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 30 21:48:35 crc kubenswrapper[4869]: I0130 21:48:35.165445 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 30 21:48:35 crc kubenswrapper[4869]: I0130 21:48:35.170643 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 30 21:48:35 crc kubenswrapper[4869]: I0130 21:48:35.206091 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 30 21:48:35 crc kubenswrapper[4869]: I0130 21:48:35.225912 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 30 21:48:35 crc kubenswrapper[4869]: I0130 21:48:35.237108 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 30 21:48:35 crc kubenswrapper[4869]: I0130 21:48:35.262945 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 30 21:48:35 crc kubenswrapper[4869]: I0130 21:48:35.424409 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 30 21:48:35 crc kubenswrapper[4869]: I0130 21:48:35.518980 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 30 21:48:35 crc kubenswrapper[4869]: I0130 21:48:35.532446 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 30 
21:48:35 crc kubenswrapper[4869]: I0130 21:48:35.552768 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 30 21:48:35 crc kubenswrapper[4869]: I0130 21:48:35.619400 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 30 21:48:35 crc kubenswrapper[4869]: I0130 21:48:35.675595 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 30 21:48:35 crc kubenswrapper[4869]: I0130 21:48:35.682187 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 30 21:48:35 crc kubenswrapper[4869]: I0130 21:48:35.733419 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 30 21:48:35 crc kubenswrapper[4869]: I0130 21:48:35.811201 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 30 21:48:35 crc kubenswrapper[4869]: I0130 21:48:35.813529 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 30 21:48:35 crc kubenswrapper[4869]: I0130 21:48:35.872471 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 30 21:48:36 crc kubenswrapper[4869]: I0130 21:48:36.059450 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 30 21:48:36 crc kubenswrapper[4869]: I0130 21:48:36.340603 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 30 21:48:36 crc kubenswrapper[4869]: I0130 21:48:36.389150 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 30 21:48:36 crc kubenswrapper[4869]: I0130 21:48:36.433267 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 30 21:48:36 crc kubenswrapper[4869]: I0130 21:48:36.446185 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 30 21:48:36 crc kubenswrapper[4869]: I0130 21:48:36.517404 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 30 21:48:36 crc kubenswrapper[4869]: I0130 21:48:36.570750 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 30 21:48:36 crc kubenswrapper[4869]: I0130 21:48:36.590441 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 30 21:48:36 crc kubenswrapper[4869]: I0130 21:48:36.603296 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 30 21:48:36 crc kubenswrapper[4869]: I0130 21:48:36.707128 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 30 21:48:36 crc kubenswrapper[4869]: I0130 21:48:36.761401 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 30 21:48:36 crc kubenswrapper[4869]: I0130 
21:48:36.875265 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 30 21:48:36 crc kubenswrapper[4869]: I0130 21:48:36.898252 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 30 21:48:37 crc kubenswrapper[4869]: I0130 21:48:37.035989 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 30 21:48:37 crc kubenswrapper[4869]: I0130 21:48:37.197953 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 30 21:48:37 crc kubenswrapper[4869]: I0130 21:48:37.202409 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 30 21:48:37 crc kubenswrapper[4869]: I0130 21:48:37.211470 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 30 21:48:37 crc kubenswrapper[4869]: I0130 21:48:37.266538 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 30 21:48:37 crc kubenswrapper[4869]: I0130 21:48:37.285049 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 30 21:48:37 crc kubenswrapper[4869]: I0130 21:48:37.411993 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 30 21:48:37 crc kubenswrapper[4869]: I0130 21:48:37.496113 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 30 21:48:37 crc kubenswrapper[4869]: I0130 21:48:37.613685 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 30 21:48:37 crc kubenswrapper[4869]: I0130 21:48:37.694503 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 30 21:48:37 crc kubenswrapper[4869]: I0130 21:48:37.733511 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fbc9cd55b-dqfs5"] Jan 30 21:48:37 crc kubenswrapper[4869]: I0130 21:48:37.734019 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7fbc9cd55b-dqfs5" podUID="abd2a4d5-cfb7-4ae5-865c-e95650925df8" containerName="route-controller-manager" containerID="cri-o://09cdcde61ff427ce25759af098c243e3d8390adc84bb234dc2bf6549d307f86d" gracePeriod=30 Jan 30 21:48:37 crc kubenswrapper[4869]: I0130 21:48:37.743514 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c6bc75d49-flzfc"] Jan 30 21:48:37 crc kubenswrapper[4869]: I0130 21:48:37.744419 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7c6bc75d49-flzfc" podUID="be261ab9-3f72-4696-8357-f2567b4f20f8" containerName="controller-manager" containerID="cri-o://502c817202ae2c832ce986979663b04d643ae05dd0110faabfb0836f61015f14" gracePeriod=30 Jan 30 21:48:37 crc kubenswrapper[4869]: I0130 21:48:37.881116 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 30 
21:48:38 crc kubenswrapper[4869]: I0130 21:48:38.044304 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 30 21:48:38 crc kubenswrapper[4869]: I0130 21:48:38.819266 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7fbc9cd55b-dqfs5" Jan 30 21:48:38 crc kubenswrapper[4869]: I0130 21:48:38.825605 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c6bc75d49-flzfc" Jan 30 21:48:38 crc kubenswrapper[4869]: I0130 21:48:38.903303 4869 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 30 21:48:38 crc kubenswrapper[4869]: I0130 21:48:38.978483 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be261ab9-3f72-4696-8357-f2567b4f20f8-serving-cert\") pod \"be261ab9-3f72-4696-8357-f2567b4f20f8\" (UID: \"be261ab9-3f72-4696-8357-f2567b4f20f8\") " Jan 30 21:48:38 crc kubenswrapper[4869]: I0130 21:48:38.978539 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trsx2\" (UniqueName: \"kubernetes.io/projected/abd2a4d5-cfb7-4ae5-865c-e95650925df8-kube-api-access-trsx2\") pod \"abd2a4d5-cfb7-4ae5-865c-e95650925df8\" (UID: \"abd2a4d5-cfb7-4ae5-865c-e95650925df8\") " Jan 30 21:48:38 crc kubenswrapper[4869]: I0130 21:48:38.978569 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/be261ab9-3f72-4696-8357-f2567b4f20f8-proxy-ca-bundles\") pod \"be261ab9-3f72-4696-8357-f2567b4f20f8\" (UID: \"be261ab9-3f72-4696-8357-f2567b4f20f8\") " Jan 30 21:48:38 crc kubenswrapper[4869]: I0130 21:48:38.978591 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abd2a4d5-cfb7-4ae5-865c-e95650925df8-config\") pod \"abd2a4d5-cfb7-4ae5-865c-e95650925df8\" (UID: \"abd2a4d5-cfb7-4ae5-865c-e95650925df8\") " Jan 30 21:48:38 crc kubenswrapper[4869]: I0130 21:48:38.978646 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be261ab9-3f72-4696-8357-f2567b4f20f8-config\") pod \"be261ab9-3f72-4696-8357-f2567b4f20f8\" (UID: \"be261ab9-3f72-4696-8357-f2567b4f20f8\") " Jan 30 21:48:38 crc kubenswrapper[4869]: I0130 21:48:38.978689 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be261ab9-3f72-4696-8357-f2567b4f20f8-client-ca\") pod \"be261ab9-3f72-4696-8357-f2567b4f20f8\" (UID: \"be261ab9-3f72-4696-8357-f2567b4f20f8\") " Jan 30 21:48:38 crc kubenswrapper[4869]: I0130 21:48:38.978717 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjs2f\" (UniqueName: \"kubernetes.io/projected/be261ab9-3f72-4696-8357-f2567b4f20f8-kube-api-access-cjs2f\") pod \"be261ab9-3f72-4696-8357-f2567b4f20f8\" (UID: \"be261ab9-3f72-4696-8357-f2567b4f20f8\") " Jan 30 21:48:38 crc kubenswrapper[4869]: I0130 21:48:38.978743 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abd2a4d5-cfb7-4ae5-865c-e95650925df8-client-ca\") pod 
\"abd2a4d5-cfb7-4ae5-865c-e95650925df8\" (UID: \"abd2a4d5-cfb7-4ae5-865c-e95650925df8\") " Jan 30 21:48:38 crc kubenswrapper[4869]: I0130 21:48:38.978789 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abd2a4d5-cfb7-4ae5-865c-e95650925df8-serving-cert\") pod \"abd2a4d5-cfb7-4ae5-865c-e95650925df8\" (UID: \"abd2a4d5-cfb7-4ae5-865c-e95650925df8\") " Jan 30 21:48:38 crc kubenswrapper[4869]: I0130 21:48:38.979271 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be261ab9-3f72-4696-8357-f2567b4f20f8-client-ca" (OuterVolumeSpecName: "client-ca") pod "be261ab9-3f72-4696-8357-f2567b4f20f8" (UID: "be261ab9-3f72-4696-8357-f2567b4f20f8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:48:38 crc kubenswrapper[4869]: I0130 21:48:38.979363 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abd2a4d5-cfb7-4ae5-865c-e95650925df8-client-ca" (OuterVolumeSpecName: "client-ca") pod "abd2a4d5-cfb7-4ae5-865c-e95650925df8" (UID: "abd2a4d5-cfb7-4ae5-865c-e95650925df8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:48:38 crc kubenswrapper[4869]: I0130 21:48:38.979385 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abd2a4d5-cfb7-4ae5-865c-e95650925df8-config" (OuterVolumeSpecName: "config") pod "abd2a4d5-cfb7-4ae5-865c-e95650925df8" (UID: "abd2a4d5-cfb7-4ae5-865c-e95650925df8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:48:38 crc kubenswrapper[4869]: I0130 21:48:38.979459 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be261ab9-3f72-4696-8357-f2567b4f20f8-config" (OuterVolumeSpecName: "config") pod "be261ab9-3f72-4696-8357-f2567b4f20f8" (UID: "be261ab9-3f72-4696-8357-f2567b4f20f8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:48:38 crc kubenswrapper[4869]: I0130 21:48:38.979691 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be261ab9-3f72-4696-8357-f2567b4f20f8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "be261ab9-3f72-4696-8357-f2567b4f20f8" (UID: "be261ab9-3f72-4696-8357-f2567b4f20f8"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:48:38 crc kubenswrapper[4869]: I0130 21:48:38.980376 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be261ab9-3f72-4696-8357-f2567b4f20f8-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:48:38 crc kubenswrapper[4869]: I0130 21:48:38.980397 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be261ab9-3f72-4696-8357-f2567b4f20f8-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:48:38 crc kubenswrapper[4869]: I0130 21:48:38.980408 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abd2a4d5-cfb7-4ae5-865c-e95650925df8-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:48:38 crc kubenswrapper[4869]: I0130 21:48:38.980427 4869 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/be261ab9-3f72-4696-8357-f2567b4f20f8-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 21:48:38 crc kubenswrapper[4869]: I0130 21:48:38.980444 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abd2a4d5-cfb7-4ae5-865c-e95650925df8-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:48:38 crc kubenswrapper[4869]: I0130 21:48:38.983447 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be261ab9-3f72-4696-8357-f2567b4f20f8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "be261ab9-3f72-4696-8357-f2567b4f20f8" (UID: "be261ab9-3f72-4696-8357-f2567b4f20f8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:48:38 crc kubenswrapper[4869]: I0130 21:48:38.983588 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abd2a4d5-cfb7-4ae5-865c-e95650925df8-kube-api-access-trsx2" (OuterVolumeSpecName: "kube-api-access-trsx2") pod "abd2a4d5-cfb7-4ae5-865c-e95650925df8" (UID: "abd2a4d5-cfb7-4ae5-865c-e95650925df8"). InnerVolumeSpecName "kube-api-access-trsx2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:48:38 crc kubenswrapper[4869]: I0130 21:48:38.983667 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be261ab9-3f72-4696-8357-f2567b4f20f8-kube-api-access-cjs2f" (OuterVolumeSpecName: "kube-api-access-cjs2f") pod "be261ab9-3f72-4696-8357-f2567b4f20f8" (UID: "be261ab9-3f72-4696-8357-f2567b4f20f8"). InnerVolumeSpecName "kube-api-access-cjs2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:48:38 crc kubenswrapper[4869]: I0130 21:48:38.983974 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abd2a4d5-cfb7-4ae5-865c-e95650925df8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "abd2a4d5-cfb7-4ae5-865c-e95650925df8" (UID: "abd2a4d5-cfb7-4ae5-865c-e95650925df8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.064231 4869 generic.go:334] "Generic (PLEG): container finished" podID="abd2a4d5-cfb7-4ae5-865c-e95650925df8" containerID="09cdcde61ff427ce25759af098c243e3d8390adc84bb234dc2bf6549d307f86d" exitCode=0 Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.064300 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7fbc9cd55b-dqfs5" Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.064320 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7fbc9cd55b-dqfs5" event={"ID":"abd2a4d5-cfb7-4ae5-865c-e95650925df8","Type":"ContainerDied","Data":"09cdcde61ff427ce25759af098c243e3d8390adc84bb234dc2bf6549d307f86d"} Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.064380 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7fbc9cd55b-dqfs5" event={"ID":"abd2a4d5-cfb7-4ae5-865c-e95650925df8","Type":"ContainerDied","Data":"f5c44b265ec9e9995f2668d217931298e21b0c2b9043d4055fb095a0a7967c98"} Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.064398 4869 scope.go:117] "RemoveContainer" containerID="09cdcde61ff427ce25759af098c243e3d8390adc84bb234dc2bf6549d307f86d" Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.069652 4869 generic.go:334] "Generic (PLEG): container finished" podID="be261ab9-3f72-4696-8357-f2567b4f20f8" containerID="502c817202ae2c832ce986979663b04d643ae05dd0110faabfb0836f61015f14" exitCode=0 Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.069767 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c6bc75d49-flzfc" event={"ID":"be261ab9-3f72-4696-8357-f2567b4f20f8","Type":"ContainerDied","Data":"502c817202ae2c832ce986979663b04d643ae05dd0110faabfb0836f61015f14"} Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.069804 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c6bc75d49-flzfc" event={"ID":"be261ab9-3f72-4696-8357-f2567b4f20f8","Type":"ContainerDied","Data":"7accb92361236c9f30115595acd25d649f24534f05d2d1dad1d167cbfd61d417"} Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.069874 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7c6bc75d49-flzfc" Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.081266 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjs2f\" (UniqueName: \"kubernetes.io/projected/be261ab9-3f72-4696-8357-f2567b4f20f8-kube-api-access-cjs2f\") on node \"crc\" DevicePath \"\"" Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.081298 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abd2a4d5-cfb7-4ae5-865c-e95650925df8-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.081309 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be261ab9-3f72-4696-8357-f2567b4f20f8-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.081322 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trsx2\" (UniqueName: \"kubernetes.io/projected/abd2a4d5-cfb7-4ae5-865c-e95650925df8-kube-api-access-trsx2\") on node \"crc\" DevicePath \"\"" Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.092290 4869 scope.go:117] "RemoveContainer" containerID="09cdcde61ff427ce25759af098c243e3d8390adc84bb234dc2bf6549d307f86d" Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.102094 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fbc9cd55b-dqfs5"] Jan 30 21:48:39 crc kubenswrapper[4869]: E0130 21:48:39.104285 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09cdcde61ff427ce25759af098c243e3d8390adc84bb234dc2bf6549d307f86d\": container with ID starting with 09cdcde61ff427ce25759af098c243e3d8390adc84bb234dc2bf6549d307f86d not found: ID does not exist" containerID="09cdcde61ff427ce25759af098c243e3d8390adc84bb234dc2bf6549d307f86d" Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.104341 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09cdcde61ff427ce25759af098c243e3d8390adc84bb234dc2bf6549d307f86d"} err="failed to get container status \"09cdcde61ff427ce25759af098c243e3d8390adc84bb234dc2bf6549d307f86d\": rpc error: code = NotFound desc = could not find container \"09cdcde61ff427ce25759af098c243e3d8390adc84bb234dc2bf6549d307f86d\": container with ID starting with 09cdcde61ff427ce25759af098c243e3d8390adc84bb234dc2bf6549d307f86d not found: ID does not exist" Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.104376 4869 scope.go:117] "RemoveContainer" containerID="502c817202ae2c832ce986979663b04d643ae05dd0110faabfb0836f61015f14" Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.108374 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fbc9cd55b-dqfs5"] Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.115827 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c6bc75d49-flzfc"] Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.118844 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7c6bc75d49-flzfc"] Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.119192 4869 scope.go:117] "RemoveContainer" 
containerID="502c817202ae2c832ce986979663b04d643ae05dd0110faabfb0836f61015f14" Jan 30 21:48:39 crc kubenswrapper[4869]: E0130 21:48:39.119688 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"502c817202ae2c832ce986979663b04d643ae05dd0110faabfb0836f61015f14\": container with ID starting with 502c817202ae2c832ce986979663b04d643ae05dd0110faabfb0836f61015f14 not found: ID does not exist" containerID="502c817202ae2c832ce986979663b04d643ae05dd0110faabfb0836f61015f14" Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.119741 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"502c817202ae2c832ce986979663b04d643ae05dd0110faabfb0836f61015f14"} err="failed to get container status \"502c817202ae2c832ce986979663b04d643ae05dd0110faabfb0836f61015f14\": rpc error: code = NotFound desc = could not find container \"502c817202ae2c832ce986979663b04d643ae05dd0110faabfb0836f61015f14\": container with ID starting with 502c817202ae2c832ce986979663b04d643ae05dd0110faabfb0836f61015f14 not found: ID does not exist" Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.472749 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.539778 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.618333 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54599f87c8-vcz92"] Jan 30 21:48:39 crc kubenswrapper[4869]: E0130 21:48:39.618511 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abd2a4d5-cfb7-4ae5-865c-e95650925df8" containerName="route-controller-manager" Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.618521 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="abd2a4d5-cfb7-4ae5-865c-e95650925df8" containerName="route-controller-manager" Jan 30 21:48:39 crc kubenswrapper[4869]: E0130 21:48:39.618534 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be261ab9-3f72-4696-8357-f2567b4f20f8" containerName="controller-manager" Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.618540 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="be261ab9-3f72-4696-8357-f2567b4f20f8" containerName="controller-manager" Jan 30 21:48:39 crc kubenswrapper[4869]: E0130 21:48:39.618553 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4efc27c1-fb04-4be9-88b2-25b7657400b7" containerName="installer" Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.618558 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4efc27c1-fb04-4be9-88b2-25b7657400b7" containerName="installer" Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.618637 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="be261ab9-3f72-4696-8357-f2567b4f20f8" containerName="controller-manager" Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.618655 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="abd2a4d5-cfb7-4ae5-865c-e95650925df8" containerName="route-controller-manager" Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.618663 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="4efc27c1-fb04-4be9-88b2-25b7657400b7" 
containerName="installer" Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.619011 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54599f87c8-vcz92" Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.622995 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.623179 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.623370 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.623512 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.623643 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.624162 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.628516 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54599f87c8-vcz92"] Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.790512 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fwr6\" (UniqueName: \"kubernetes.io/projected/4ccff911-3d21-4cd0-8629-7e53f770f126-kube-api-access-8fwr6\") pod \"route-controller-manager-54599f87c8-vcz92\" (UID: \"4ccff911-3d21-4cd0-8629-7e53f770f126\") " pod="openshift-route-controller-manager/route-controller-manager-54599f87c8-vcz92" Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.790568 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ccff911-3d21-4cd0-8629-7e53f770f126-client-ca\") pod \"route-controller-manager-54599f87c8-vcz92\" (UID: \"4ccff911-3d21-4cd0-8629-7e53f770f126\") " pod="openshift-route-controller-manager/route-controller-manager-54599f87c8-vcz92" Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.790638 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ccff911-3d21-4cd0-8629-7e53f770f126-config\") pod \"route-controller-manager-54599f87c8-vcz92\" (UID: \"4ccff911-3d21-4cd0-8629-7e53f770f126\") " pod="openshift-route-controller-manager/route-controller-manager-54599f87c8-vcz92" Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.790662 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ccff911-3d21-4cd0-8629-7e53f770f126-serving-cert\") pod \"route-controller-manager-54599f87c8-vcz92\" (UID: \"4ccff911-3d21-4cd0-8629-7e53f770f126\") " pod="openshift-route-controller-manager/route-controller-manager-54599f87c8-vcz92" Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.885914 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="abd2a4d5-cfb7-4ae5-865c-e95650925df8" path="/var/lib/kubelet/pods/abd2a4d5-cfb7-4ae5-865c-e95650925df8/volumes" Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.886589 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be261ab9-3f72-4696-8357-f2567b4f20f8" path="/var/lib/kubelet/pods/be261ab9-3f72-4696-8357-f2567b4f20f8/volumes" Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.891546 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ccff911-3d21-4cd0-8629-7e53f770f126-serving-cert\") pod \"route-controller-manager-54599f87c8-vcz92\" (UID: \"4ccff911-3d21-4cd0-8629-7e53f770f126\") " pod="openshift-route-controller-manager/route-controller-manager-54599f87c8-vcz92" Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.891603 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fwr6\" (UniqueName: \"kubernetes.io/projected/4ccff911-3d21-4cd0-8629-7e53f770f126-kube-api-access-8fwr6\") pod \"route-controller-manager-54599f87c8-vcz92\" (UID: \"4ccff911-3d21-4cd0-8629-7e53f770f126\") " pod="openshift-route-controller-manager/route-controller-manager-54599f87c8-vcz92" Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.891630 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ccff911-3d21-4cd0-8629-7e53f770f126-client-ca\") pod \"route-controller-manager-54599f87c8-vcz92\" (UID: \"4ccff911-3d21-4cd0-8629-7e53f770f126\") " pod="openshift-route-controller-manager/route-controller-manager-54599f87c8-vcz92" Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.891687 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ccff911-3d21-4cd0-8629-7e53f770f126-config\") pod \"route-controller-manager-54599f87c8-vcz92\" (UID: \"4ccff911-3d21-4cd0-8629-7e53f770f126\") " pod="openshift-route-controller-manager/route-controller-manager-54599f87c8-vcz92" Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.892968 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ccff911-3d21-4cd0-8629-7e53f770f126-config\") pod \"route-controller-manager-54599f87c8-vcz92\" (UID: \"4ccff911-3d21-4cd0-8629-7e53f770f126\") " pod="openshift-route-controller-manager/route-controller-manager-54599f87c8-vcz92" Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.893380 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ccff911-3d21-4cd0-8629-7e53f770f126-client-ca\") pod \"route-controller-manager-54599f87c8-vcz92\" (UID: \"4ccff911-3d21-4cd0-8629-7e53f770f126\") " pod="openshift-route-controller-manager/route-controller-manager-54599f87c8-vcz92" Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.896656 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ccff911-3d21-4cd0-8629-7e53f770f126-serving-cert\") pod \"route-controller-manager-54599f87c8-vcz92\" (UID: \"4ccff911-3d21-4cd0-8629-7e53f770f126\") " pod="openshift-route-controller-manager/route-controller-manager-54599f87c8-vcz92" Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.913916 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fwr6\" (UniqueName: 
\"kubernetes.io/projected/4ccff911-3d21-4cd0-8629-7e53f770f126-kube-api-access-8fwr6\") pod \"route-controller-manager-54599f87c8-vcz92\" (UID: \"4ccff911-3d21-4cd0-8629-7e53f770f126\") " pod="openshift-route-controller-manager/route-controller-manager-54599f87c8-vcz92" Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.917221 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 30 21:48:39 crc kubenswrapper[4869]: I0130 21:48:39.934143 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54599f87c8-vcz92" Jan 30 21:48:40 crc kubenswrapper[4869]: I0130 21:48:40.367869 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54599f87c8-vcz92"] Jan 30 21:48:41 crc kubenswrapper[4869]: I0130 21:48:41.084852 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54599f87c8-vcz92" event={"ID":"4ccff911-3d21-4cd0-8629-7e53f770f126","Type":"ContainerStarted","Data":"e2f7b483cd5c15c81594a9db1ca4164a2531d553f066654d4bf5b2b6a529ae59"} Jan 30 21:48:41 crc kubenswrapper[4869]: I0130 21:48:41.084929 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54599f87c8-vcz92" event={"ID":"4ccff911-3d21-4cd0-8629-7e53f770f126","Type":"ContainerStarted","Data":"dd68864795a44993c3ed73cf840e12fd548645cd5f97eccfd92e912d94195ed6"} Jan 30 21:48:41 crc kubenswrapper[4869]: I0130 21:48:41.085135 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-54599f87c8-vcz92" Jan 30 21:48:41 crc kubenswrapper[4869]: I0130 21:48:41.089867 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-54599f87c8-vcz92" Jan 30 21:48:41 crc kubenswrapper[4869]: I0130 21:48:41.101578 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-54599f87c8-vcz92" podStartSLOduration=4.10154696 podStartE2EDuration="4.10154696s" podCreationTimestamp="2026-01-30 21:48:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:48:41.100536817 +0000 UTC m=+321.986294852" watchObservedRunningTime="2026-01-30 21:48:41.10154696 +0000 UTC m=+321.987305005" Jan 30 21:48:41 crc kubenswrapper[4869]: I0130 21:48:41.617249 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6cd599f9ff-5qdgh"] Jan 30 21:48:41 crc kubenswrapper[4869]: I0130 21:48:41.617883 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6cd599f9ff-5qdgh" Jan 30 21:48:41 crc kubenswrapper[4869]: I0130 21:48:41.619550 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 21:48:41 crc kubenswrapper[4869]: I0130 21:48:41.619655 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 21:48:41 crc kubenswrapper[4869]: I0130 21:48:41.619712 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 21:48:41 crc kubenswrapper[4869]: I0130 21:48:41.620394 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 21:48:41 crc kubenswrapper[4869]: I0130 21:48:41.620616 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 21:48:41 crc kubenswrapper[4869]: I0130 21:48:41.620760 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 21:48:41 crc kubenswrapper[4869]: I0130 21:48:41.628272 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cd599f9ff-5qdgh"] Jan 30 21:48:41 crc kubenswrapper[4869]: I0130 21:48:41.636203 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 21:48:41 crc kubenswrapper[4869]: I0130 21:48:41.710452 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrhcr\" (UniqueName: \"kubernetes.io/projected/5172795a-9886-4a3d-8199-04cfcefc1c0f-kube-api-access-hrhcr\") pod \"controller-manager-6cd599f9ff-5qdgh\" (UID: \"5172795a-9886-4a3d-8199-04cfcefc1c0f\") " pod="openshift-controller-manager/controller-manager-6cd599f9ff-5qdgh" Jan 30 21:48:41 crc kubenswrapper[4869]: I0130 21:48:41.710810 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5172795a-9886-4a3d-8199-04cfcefc1c0f-config\") pod \"controller-manager-6cd599f9ff-5qdgh\" (UID: \"5172795a-9886-4a3d-8199-04cfcefc1c0f\") " pod="openshift-controller-manager/controller-manager-6cd599f9ff-5qdgh" Jan 30 21:48:41 crc kubenswrapper[4869]: I0130 21:48:41.710870 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5172795a-9886-4a3d-8199-04cfcefc1c0f-proxy-ca-bundles\") pod \"controller-manager-6cd599f9ff-5qdgh\" (UID: \"5172795a-9886-4a3d-8199-04cfcefc1c0f\") " pod="openshift-controller-manager/controller-manager-6cd599f9ff-5qdgh" Jan 30 21:48:41 crc kubenswrapper[4869]: I0130 21:48:41.710934 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5172795a-9886-4a3d-8199-04cfcefc1c0f-serving-cert\") pod \"controller-manager-6cd599f9ff-5qdgh\" (UID: \"5172795a-9886-4a3d-8199-04cfcefc1c0f\") " pod="openshift-controller-manager/controller-manager-6cd599f9ff-5qdgh" Jan 30 21:48:41 crc kubenswrapper[4869]: I0130 21:48:41.711044 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/5172795a-9886-4a3d-8199-04cfcefc1c0f-client-ca\") pod \"controller-manager-6cd599f9ff-5qdgh\" (UID: \"5172795a-9886-4a3d-8199-04cfcefc1c0f\") " pod="openshift-controller-manager/controller-manager-6cd599f9ff-5qdgh" Jan 30 21:48:41 crc kubenswrapper[4869]: I0130 21:48:41.812758 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5172795a-9886-4a3d-8199-04cfcefc1c0f-config\") pod \"controller-manager-6cd599f9ff-5qdgh\" (UID: \"5172795a-9886-4a3d-8199-04cfcefc1c0f\") " pod="openshift-controller-manager/controller-manager-6cd599f9ff-5qdgh" Jan 30 21:48:41 crc kubenswrapper[4869]: I0130 21:48:41.813317 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5172795a-9886-4a3d-8199-04cfcefc1c0f-proxy-ca-bundles\") pod \"controller-manager-6cd599f9ff-5qdgh\" (UID: \"5172795a-9886-4a3d-8199-04cfcefc1c0f\") " pod="openshift-controller-manager/controller-manager-6cd599f9ff-5qdgh" Jan 30 21:48:41 crc kubenswrapper[4869]: I0130 21:48:41.813345 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5172795a-9886-4a3d-8199-04cfcefc1c0f-serving-cert\") pod \"controller-manager-6cd599f9ff-5qdgh\" (UID: \"5172795a-9886-4a3d-8199-04cfcefc1c0f\") " pod="openshift-controller-manager/controller-manager-6cd599f9ff-5qdgh" Jan 30 21:48:41 crc kubenswrapper[4869]: I0130 21:48:41.813372 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5172795a-9886-4a3d-8199-04cfcefc1c0f-client-ca\") pod \"controller-manager-6cd599f9ff-5qdgh\" (UID: \"5172795a-9886-4a3d-8199-04cfcefc1c0f\") " pod="openshift-controller-manager/controller-manager-6cd599f9ff-5qdgh" Jan 30 21:48:41 crc kubenswrapper[4869]: I0130 21:48:41.813402 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrhcr\" (UniqueName: \"kubernetes.io/projected/5172795a-9886-4a3d-8199-04cfcefc1c0f-kube-api-access-hrhcr\") pod \"controller-manager-6cd599f9ff-5qdgh\" (UID: \"5172795a-9886-4a3d-8199-04cfcefc1c0f\") " pod="openshift-controller-manager/controller-manager-6cd599f9ff-5qdgh" Jan 30 21:48:41 crc kubenswrapper[4869]: I0130 21:48:41.814571 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5172795a-9886-4a3d-8199-04cfcefc1c0f-config\") pod \"controller-manager-6cd599f9ff-5qdgh\" (UID: \"5172795a-9886-4a3d-8199-04cfcefc1c0f\") " pod="openshift-controller-manager/controller-manager-6cd599f9ff-5qdgh" Jan 30 21:48:41 crc kubenswrapper[4869]: I0130 21:48:41.816009 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5172795a-9886-4a3d-8199-04cfcefc1c0f-proxy-ca-bundles\") pod \"controller-manager-6cd599f9ff-5qdgh\" (UID: \"5172795a-9886-4a3d-8199-04cfcefc1c0f\") " pod="openshift-controller-manager/controller-manager-6cd599f9ff-5qdgh" Jan 30 21:48:41 crc kubenswrapper[4869]: I0130 21:48:41.816130 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5172795a-9886-4a3d-8199-04cfcefc1c0f-client-ca\") pod \"controller-manager-6cd599f9ff-5qdgh\" (UID: \"5172795a-9886-4a3d-8199-04cfcefc1c0f\") " 
pod="openshift-controller-manager/controller-manager-6cd599f9ff-5qdgh" Jan 30 21:48:41 crc kubenswrapper[4869]: I0130 21:48:41.821320 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5172795a-9886-4a3d-8199-04cfcefc1c0f-serving-cert\") pod \"controller-manager-6cd599f9ff-5qdgh\" (UID: \"5172795a-9886-4a3d-8199-04cfcefc1c0f\") " pod="openshift-controller-manager/controller-manager-6cd599f9ff-5qdgh" Jan 30 21:48:41 crc kubenswrapper[4869]: I0130 21:48:41.832948 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrhcr\" (UniqueName: \"kubernetes.io/projected/5172795a-9886-4a3d-8199-04cfcefc1c0f-kube-api-access-hrhcr\") pod \"controller-manager-6cd599f9ff-5qdgh\" (UID: \"5172795a-9886-4a3d-8199-04cfcefc1c0f\") " pod="openshift-controller-manager/controller-manager-6cd599f9ff-5qdgh" Jan 30 21:48:41 crc kubenswrapper[4869]: I0130 21:48:41.979798 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cd599f9ff-5qdgh" Jan 30 21:48:42 crc kubenswrapper[4869]: I0130 21:48:42.366257 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cd599f9ff-5qdgh"] Jan 30 21:48:42 crc kubenswrapper[4869]: W0130 21:48:42.371973 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5172795a_9886_4a3d_8199_04cfcefc1c0f.slice/crio-918cf5ab7236c617cb65e5d4627fd91011d0bbd699eb78969db077c3928eef98 WatchSource:0}: Error finding container 918cf5ab7236c617cb65e5d4627fd91011d0bbd699eb78969db077c3928eef98: Status 404 returned error can't find the container with id 918cf5ab7236c617cb65e5d4627fd91011d0bbd699eb78969db077c3928eef98 Jan 30 21:48:43 crc kubenswrapper[4869]: I0130 21:48:43.096130 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cd599f9ff-5qdgh" event={"ID":"5172795a-9886-4a3d-8199-04cfcefc1c0f","Type":"ContainerStarted","Data":"65221cf2877e6712c716448544bd40124ee10b5f755ad2b0e07a41830c30fcf7"} Jan 30 21:48:43 crc kubenswrapper[4869]: I0130 21:48:43.096202 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cd599f9ff-5qdgh" event={"ID":"5172795a-9886-4a3d-8199-04cfcefc1c0f","Type":"ContainerStarted","Data":"918cf5ab7236c617cb65e5d4627fd91011d0bbd699eb78969db077c3928eef98"} Jan 30 21:48:43 crc kubenswrapper[4869]: I0130 21:48:43.096236 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6cd599f9ff-5qdgh" Jan 30 21:48:43 crc kubenswrapper[4869]: I0130 21:48:43.105091 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6cd599f9ff-5qdgh" Jan 30 21:48:43 crc kubenswrapper[4869]: I0130 21:48:43.121220 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6cd599f9ff-5qdgh" podStartSLOduration=6.121198408 podStartE2EDuration="6.121198408s" podCreationTimestamp="2026-01-30 21:48:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:48:43.115542257 +0000 UTC m=+324.001300282" watchObservedRunningTime="2026-01-30 21:48:43.121198408 +0000 UTC m=+324.006956423" Jan 30 21:48:44 crc 
kubenswrapper[4869]: I0130 21:48:44.473351 4869 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 21:48:44 crc kubenswrapper[4869]: I0130 21:48:44.473871 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://3bb392b8360a7afefd737a17a05005a90586eefb8be447f5c946c6f69792a0cf" gracePeriod=5 Jan 30 21:48:50 crc kubenswrapper[4869]: I0130 21:48:50.059777 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 30 21:48:50 crc kubenswrapper[4869]: I0130 21:48:50.060430 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:48:50 crc kubenswrapper[4869]: I0130 21:48:50.128274 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 21:48:50 crc kubenswrapper[4869]: I0130 21:48:50.128344 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 21:48:50 crc kubenswrapper[4869]: I0130 21:48:50.128364 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 21:48:50 crc kubenswrapper[4869]: I0130 21:48:50.128410 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:48:50 crc kubenswrapper[4869]: I0130 21:48:50.128422 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:48:50 crc kubenswrapper[4869]: I0130 21:48:50.128502 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 21:48:50 crc kubenswrapper[4869]: I0130 21:48:50.128606 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 21:48:50 crc kubenswrapper[4869]: I0130 21:48:50.128644 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:48:50 crc kubenswrapper[4869]: I0130 21:48:50.128790 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:48:50 crc kubenswrapper[4869]: I0130 21:48:50.129109 4869 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 30 21:48:50 crc kubenswrapper[4869]: I0130 21:48:50.129130 4869 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 30 21:48:50 crc kubenswrapper[4869]: I0130 21:48:50.129162 4869 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 30 21:48:50 crc kubenswrapper[4869]: I0130 21:48:50.129177 4869 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 21:48:50 crc kubenswrapper[4869]: I0130 21:48:50.142730 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:48:50 crc kubenswrapper[4869]: I0130 21:48:50.145400 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 30 21:48:50 crc kubenswrapper[4869]: I0130 21:48:50.145471 4869 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="3bb392b8360a7afefd737a17a05005a90586eefb8be447f5c946c6f69792a0cf" exitCode=137 Jan 30 21:48:50 crc kubenswrapper[4869]: I0130 21:48:50.145529 4869 scope.go:117] "RemoveContainer" containerID="3bb392b8360a7afefd737a17a05005a90586eefb8be447f5c946c6f69792a0cf" Jan 30 21:48:50 crc kubenswrapper[4869]: I0130 21:48:50.145568 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:48:50 crc kubenswrapper[4869]: I0130 21:48:50.184120 4869 scope.go:117] "RemoveContainer" containerID="3bb392b8360a7afefd737a17a05005a90586eefb8be447f5c946c6f69792a0cf" Jan 30 21:48:50 crc kubenswrapper[4869]: E0130 21:48:50.184628 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bb392b8360a7afefd737a17a05005a90586eefb8be447f5c946c6f69792a0cf\": container with ID starting with 3bb392b8360a7afefd737a17a05005a90586eefb8be447f5c946c6f69792a0cf not found: ID does not exist" containerID="3bb392b8360a7afefd737a17a05005a90586eefb8be447f5c946c6f69792a0cf" Jan 30 21:48:50 crc kubenswrapper[4869]: I0130 21:48:50.184715 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bb392b8360a7afefd737a17a05005a90586eefb8be447f5c946c6f69792a0cf"} err="failed to get container status \"3bb392b8360a7afefd737a17a05005a90586eefb8be447f5c946c6f69792a0cf\": rpc error: code = NotFound desc = could not find container \"3bb392b8360a7afefd737a17a05005a90586eefb8be447f5c946c6f69792a0cf\": container with ID starting with 3bb392b8360a7afefd737a17a05005a90586eefb8be447f5c946c6f69792a0cf not found: ID does not exist" Jan 30 21:48:50 crc kubenswrapper[4869]: I0130 21:48:50.231073 4869 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 21:48:51 crc kubenswrapper[4869]: I0130 21:48:51.884840 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 30 21:48:51 crc kubenswrapper[4869]: I0130 21:48:51.885770 4869 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 30 21:48:51 crc kubenswrapper[4869]: I0130 21:48:51.896080 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 21:48:51 crc kubenswrapper[4869]: I0130 21:48:51.896307 4869 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="4809ce88-d262-408f-98b2-cb6e1093d6f3" Jan 30 21:48:51 crc kubenswrapper[4869]: I0130 21:48:51.899201 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 21:48:51 crc kubenswrapper[4869]: I0130 
21:48:51.899238 4869 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="4809ce88-d262-408f-98b2-cb6e1093d6f3" Jan 30 21:48:56 crc kubenswrapper[4869]: I0130 21:48:56.982775 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fgmnt"] Jan 30 21:48:57 crc kubenswrapper[4869]: I0130 21:48:57.659615 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6cd599f9ff-5qdgh"] Jan 30 21:48:57 crc kubenswrapper[4869]: I0130 21:48:57.659843 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6cd599f9ff-5qdgh" podUID="5172795a-9886-4a3d-8199-04cfcefc1c0f" containerName="controller-manager" containerID="cri-o://65221cf2877e6712c716448544bd40124ee10b5f755ad2b0e07a41830c30fcf7" gracePeriod=30 Jan 30 21:48:57 crc kubenswrapper[4869]: I0130 21:48:57.682150 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54599f87c8-vcz92"] Jan 30 21:48:57 crc kubenswrapper[4869]: I0130 21:48:57.682591 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-54599f87c8-vcz92" podUID="4ccff911-3d21-4cd0-8629-7e53f770f126" containerName="route-controller-manager" containerID="cri-o://e2f7b483cd5c15c81594a9db1ca4164a2531d553f066654d4bf5b2b6a529ae59" gracePeriod=30 Jan 30 21:48:58 crc kubenswrapper[4869]: I0130 21:48:58.171225 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54599f87c8-vcz92" Jan 30 21:48:58 crc kubenswrapper[4869]: I0130 21:48:58.190585 4869 generic.go:334] "Generic (PLEG): container finished" podID="5172795a-9886-4a3d-8199-04cfcefc1c0f" containerID="65221cf2877e6712c716448544bd40124ee10b5f755ad2b0e07a41830c30fcf7" exitCode=0 Jan 30 21:48:58 crc kubenswrapper[4869]: I0130 21:48:58.190676 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cd599f9ff-5qdgh" event={"ID":"5172795a-9886-4a3d-8199-04cfcefc1c0f","Type":"ContainerDied","Data":"65221cf2877e6712c716448544bd40124ee10b5f755ad2b0e07a41830c30fcf7"} Jan 30 21:48:58 crc kubenswrapper[4869]: I0130 21:48:58.192808 4869 generic.go:334] "Generic (PLEG): container finished" podID="4ccff911-3d21-4cd0-8629-7e53f770f126" containerID="e2f7b483cd5c15c81594a9db1ca4164a2531d553f066654d4bf5b2b6a529ae59" exitCode=0 Jan 30 21:48:58 crc kubenswrapper[4869]: I0130 21:48:58.192875 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54599f87c8-vcz92" event={"ID":"4ccff911-3d21-4cd0-8629-7e53f770f126","Type":"ContainerDied","Data":"e2f7b483cd5c15c81594a9db1ca4164a2531d553f066654d4bf5b2b6a529ae59"} Jan 30 21:48:58 crc kubenswrapper[4869]: I0130 21:48:58.192927 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54599f87c8-vcz92" event={"ID":"4ccff911-3d21-4cd0-8629-7e53f770f126","Type":"ContainerDied","Data":"dd68864795a44993c3ed73cf840e12fd548645cd5f97eccfd92e912d94195ed6"} Jan 30 21:48:58 crc kubenswrapper[4869]: I0130 21:48:58.192949 4869 scope.go:117] "RemoveContainer" containerID="e2f7b483cd5c15c81594a9db1ca4164a2531d553f066654d4bf5b2b6a529ae59" Jan 30 
21:48:58 crc kubenswrapper[4869]: I0130 21:48:58.193150 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54599f87c8-vcz92" Jan 30 21:48:58 crc kubenswrapper[4869]: I0130 21:48:58.213420 4869 scope.go:117] "RemoveContainer" containerID="e2f7b483cd5c15c81594a9db1ca4164a2531d553f066654d4bf5b2b6a529ae59" Jan 30 21:48:58 crc kubenswrapper[4869]: E0130 21:48:58.214021 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2f7b483cd5c15c81594a9db1ca4164a2531d553f066654d4bf5b2b6a529ae59\": container with ID starting with e2f7b483cd5c15c81594a9db1ca4164a2531d553f066654d4bf5b2b6a529ae59 not found: ID does not exist" containerID="e2f7b483cd5c15c81594a9db1ca4164a2531d553f066654d4bf5b2b6a529ae59" Jan 30 21:48:58 crc kubenswrapper[4869]: I0130 21:48:58.214097 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2f7b483cd5c15c81594a9db1ca4164a2531d553f066654d4bf5b2b6a529ae59"} err="failed to get container status \"e2f7b483cd5c15c81594a9db1ca4164a2531d553f066654d4bf5b2b6a529ae59\": rpc error: code = NotFound desc = could not find container \"e2f7b483cd5c15c81594a9db1ca4164a2531d553f066654d4bf5b2b6a529ae59\": container with ID starting with e2f7b483cd5c15c81594a9db1ca4164a2531d553f066654d4bf5b2b6a529ae59 not found: ID does not exist" Jan 30 21:48:58 crc kubenswrapper[4869]: I0130 21:48:58.231635 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ccff911-3d21-4cd0-8629-7e53f770f126-client-ca\") pod \"4ccff911-3d21-4cd0-8629-7e53f770f126\" (UID: \"4ccff911-3d21-4cd0-8629-7e53f770f126\") " Jan 30 21:48:58 crc kubenswrapper[4869]: I0130 21:48:58.231692 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ccff911-3d21-4cd0-8629-7e53f770f126-serving-cert\") pod \"4ccff911-3d21-4cd0-8629-7e53f770f126\" (UID: \"4ccff911-3d21-4cd0-8629-7e53f770f126\") " Jan 30 21:48:58 crc kubenswrapper[4869]: I0130 21:48:58.231759 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ccff911-3d21-4cd0-8629-7e53f770f126-config\") pod \"4ccff911-3d21-4cd0-8629-7e53f770f126\" (UID: \"4ccff911-3d21-4cd0-8629-7e53f770f126\") " Jan 30 21:48:58 crc kubenswrapper[4869]: I0130 21:48:58.231808 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fwr6\" (UniqueName: \"kubernetes.io/projected/4ccff911-3d21-4cd0-8629-7e53f770f126-kube-api-access-8fwr6\") pod \"4ccff911-3d21-4cd0-8629-7e53f770f126\" (UID: \"4ccff911-3d21-4cd0-8629-7e53f770f126\") " Jan 30 21:48:58 crc kubenswrapper[4869]: I0130 21:48:58.233384 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ccff911-3d21-4cd0-8629-7e53f770f126-config" (OuterVolumeSpecName: "config") pod "4ccff911-3d21-4cd0-8629-7e53f770f126" (UID: "4ccff911-3d21-4cd0-8629-7e53f770f126"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:48:58 crc kubenswrapper[4869]: I0130 21:48:58.233910 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ccff911-3d21-4cd0-8629-7e53f770f126-client-ca" (OuterVolumeSpecName: "client-ca") pod "4ccff911-3d21-4cd0-8629-7e53f770f126" (UID: "4ccff911-3d21-4cd0-8629-7e53f770f126"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:48:58 crc kubenswrapper[4869]: I0130 21:48:58.245086 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ccff911-3d21-4cd0-8629-7e53f770f126-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4ccff911-3d21-4cd0-8629-7e53f770f126" (UID: "4ccff911-3d21-4cd0-8629-7e53f770f126"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:48:58 crc kubenswrapper[4869]: I0130 21:48:58.260214 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ccff911-3d21-4cd0-8629-7e53f770f126-kube-api-access-8fwr6" (OuterVolumeSpecName: "kube-api-access-8fwr6") pod "4ccff911-3d21-4cd0-8629-7e53f770f126" (UID: "4ccff911-3d21-4cd0-8629-7e53f770f126"). InnerVolumeSpecName "kube-api-access-8fwr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:48:58 crc kubenswrapper[4869]: I0130 21:48:58.333710 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ccff911-3d21-4cd0-8629-7e53f770f126-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:48:58 crc kubenswrapper[4869]: I0130 21:48:58.333742 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ccff911-3d21-4cd0-8629-7e53f770f126-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:48:58 crc kubenswrapper[4869]: I0130 21:48:58.333752 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ccff911-3d21-4cd0-8629-7e53f770f126-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:48:58 crc kubenswrapper[4869]: I0130 21:48:58.333761 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fwr6\" (UniqueName: \"kubernetes.io/projected/4ccff911-3d21-4cd0-8629-7e53f770f126-kube-api-access-8fwr6\") on node \"crc\" DevicePath \"\"" Jan 30 21:48:58 crc kubenswrapper[4869]: I0130 21:48:58.524094 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54599f87c8-vcz92"] Jan 30 21:48:58 crc kubenswrapper[4869]: I0130 21:48:58.528738 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54599f87c8-vcz92"] Jan 30 21:48:58 crc kubenswrapper[4869]: I0130 21:48:58.756845 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6cd599f9ff-5qdgh" Jan 30 21:48:58 crc kubenswrapper[4869]: I0130 21:48:58.838634 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5172795a-9886-4a3d-8199-04cfcefc1c0f-config\") pod \"5172795a-9886-4a3d-8199-04cfcefc1c0f\" (UID: \"5172795a-9886-4a3d-8199-04cfcefc1c0f\") " Jan 30 21:48:58 crc kubenswrapper[4869]: I0130 21:48:58.839581 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5172795a-9886-4a3d-8199-04cfcefc1c0f-config" (OuterVolumeSpecName: "config") pod "5172795a-9886-4a3d-8199-04cfcefc1c0f" (UID: "5172795a-9886-4a3d-8199-04cfcefc1c0f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:48:58 crc kubenswrapper[4869]: I0130 21:48:58.839737 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrhcr\" (UniqueName: \"kubernetes.io/projected/5172795a-9886-4a3d-8199-04cfcefc1c0f-kube-api-access-hrhcr\") pod \"5172795a-9886-4a3d-8199-04cfcefc1c0f\" (UID: \"5172795a-9886-4a3d-8199-04cfcefc1c0f\") " Jan 30 21:48:58 crc kubenswrapper[4869]: I0130 21:48:58.840537 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5172795a-9886-4a3d-8199-04cfcefc1c0f-client-ca\") pod \"5172795a-9886-4a3d-8199-04cfcefc1c0f\" (UID: \"5172795a-9886-4a3d-8199-04cfcefc1c0f\") " Jan 30 21:48:58 crc kubenswrapper[4869]: I0130 21:48:58.840611 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5172795a-9886-4a3d-8199-04cfcefc1c0f-serving-cert\") pod \"5172795a-9886-4a3d-8199-04cfcefc1c0f\" (UID: \"5172795a-9886-4a3d-8199-04cfcefc1c0f\") " Jan 30 21:48:58 crc kubenswrapper[4869]: I0130 21:48:58.840641 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5172795a-9886-4a3d-8199-04cfcefc1c0f-proxy-ca-bundles\") pod \"5172795a-9886-4a3d-8199-04cfcefc1c0f\" (UID: \"5172795a-9886-4a3d-8199-04cfcefc1c0f\") " Jan 30 21:48:58 crc kubenswrapper[4869]: I0130 21:48:58.841070 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5172795a-9886-4a3d-8199-04cfcefc1c0f-client-ca" (OuterVolumeSpecName: "client-ca") pod "5172795a-9886-4a3d-8199-04cfcefc1c0f" (UID: "5172795a-9886-4a3d-8199-04cfcefc1c0f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:48:58 crc kubenswrapper[4869]: I0130 21:48:58.841090 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5172795a-9886-4a3d-8199-04cfcefc1c0f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5172795a-9886-4a3d-8199-04cfcefc1c0f" (UID: "5172795a-9886-4a3d-8199-04cfcefc1c0f"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:48:58 crc kubenswrapper[4869]: I0130 21:48:58.841387 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5172795a-9886-4a3d-8199-04cfcefc1c0f-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:48:58 crc kubenswrapper[4869]: I0130 21:48:58.841409 4869 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5172795a-9886-4a3d-8199-04cfcefc1c0f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 21:48:58 crc kubenswrapper[4869]: I0130 21:48:58.841423 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5172795a-9886-4a3d-8199-04cfcefc1c0f-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:48:58 crc kubenswrapper[4869]: I0130 21:48:58.844133 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5172795a-9886-4a3d-8199-04cfcefc1c0f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5172795a-9886-4a3d-8199-04cfcefc1c0f" (UID: "5172795a-9886-4a3d-8199-04cfcefc1c0f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:48:58 crc kubenswrapper[4869]: I0130 21:48:58.844244 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5172795a-9886-4a3d-8199-04cfcefc1c0f-kube-api-access-hrhcr" (OuterVolumeSpecName: "kube-api-access-hrhcr") pod "5172795a-9886-4a3d-8199-04cfcefc1c0f" (UID: "5172795a-9886-4a3d-8199-04cfcefc1c0f"). InnerVolumeSpecName "kube-api-access-hrhcr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:48:58 crc kubenswrapper[4869]: I0130 21:48:58.942953 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrhcr\" (UniqueName: \"kubernetes.io/projected/5172795a-9886-4a3d-8199-04cfcefc1c0f-kube-api-access-hrhcr\") on node \"crc\" DevicePath \"\"" Jan 30 21:48:58 crc kubenswrapper[4869]: I0130 21:48:58.942997 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5172795a-9886-4a3d-8199-04cfcefc1c0f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.199531 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cd599f9ff-5qdgh" event={"ID":"5172795a-9886-4a3d-8199-04cfcefc1c0f","Type":"ContainerDied","Data":"918cf5ab7236c617cb65e5d4627fd91011d0bbd699eb78969db077c3928eef98"} Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.199582 4869 scope.go:117] "RemoveContainer" containerID="65221cf2877e6712c716448544bd40124ee10b5f755ad2b0e07a41830c30fcf7" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.199616 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6cd599f9ff-5qdgh" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.225815 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6cd599f9ff-5qdgh"] Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.234446 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6cd599f9ff-5qdgh"] Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.632124 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67cc8d88b-rgfvl"] Jan 30 21:48:59 crc kubenswrapper[4869]: E0130 21:48:59.632575 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5172795a-9886-4a3d-8199-04cfcefc1c0f" containerName="controller-manager" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.632588 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="5172795a-9886-4a3d-8199-04cfcefc1c0f" containerName="controller-manager" Jan 30 21:48:59 crc kubenswrapper[4869]: E0130 21:48:59.632600 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.632607 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 30 21:48:59 crc kubenswrapper[4869]: E0130 21:48:59.632622 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ccff911-3d21-4cd0-8629-7e53f770f126" containerName="route-controller-manager" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.632628 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ccff911-3d21-4cd0-8629-7e53f770f126" containerName="route-controller-manager" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.632709 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.632718 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="5172795a-9886-4a3d-8199-04cfcefc1c0f" containerName="controller-manager" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.632730 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ccff911-3d21-4cd0-8629-7e53f770f126" containerName="route-controller-manager" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.633148 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67cc8d88b-rgfvl" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.634268 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-856f74b64f-xclmx"] Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.635184 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-856f74b64f-xclmx" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.636359 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.636397 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.636437 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.636490 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.636796 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.637931 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-856f74b64f-xclmx"] Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.640318 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.640924 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67cc8d88b-rgfvl"] Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.641158 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.641310 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.641387 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.641422 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.641553 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.642496 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.646618 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.754980 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be774e7b-3a10-48ad-84fe-6c6d880fc454-client-ca\") pod \"controller-manager-856f74b64f-xclmx\" (UID: \"be774e7b-3a10-48ad-84fe-6c6d880fc454\") " pod="openshift-controller-manager/controller-manager-856f74b64f-xclmx" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.755057 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/be774e7b-3a10-48ad-84fe-6c6d880fc454-config\") pod \"controller-manager-856f74b64f-xclmx\" (UID: \"be774e7b-3a10-48ad-84fe-6c6d880fc454\") " pod="openshift-controller-manager/controller-manager-856f74b64f-xclmx" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.755088 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tzj2\" (UniqueName: \"kubernetes.io/projected/f2687bbe-78dd-4ad2-8180-8bd4698db21e-kube-api-access-8tzj2\") pod \"route-controller-manager-67cc8d88b-rgfvl\" (UID: \"f2687bbe-78dd-4ad2-8180-8bd4698db21e\") " pod="openshift-route-controller-manager/route-controller-manager-67cc8d88b-rgfvl" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.755176 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsrg2\" (UniqueName: \"kubernetes.io/projected/be774e7b-3a10-48ad-84fe-6c6d880fc454-kube-api-access-vsrg2\") pod \"controller-manager-856f74b64f-xclmx\" (UID: \"be774e7b-3a10-48ad-84fe-6c6d880fc454\") " pod="openshift-controller-manager/controller-manager-856f74b64f-xclmx" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.755203 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2687bbe-78dd-4ad2-8180-8bd4698db21e-serving-cert\") pod \"route-controller-manager-67cc8d88b-rgfvl\" (UID: \"f2687bbe-78dd-4ad2-8180-8bd4698db21e\") " pod="openshift-route-controller-manager/route-controller-manager-67cc8d88b-rgfvl" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.755278 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/be774e7b-3a10-48ad-84fe-6c6d880fc454-proxy-ca-bundles\") pod \"controller-manager-856f74b64f-xclmx\" (UID: \"be774e7b-3a10-48ad-84fe-6c6d880fc454\") " pod="openshift-controller-manager/controller-manager-856f74b64f-xclmx" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.755306 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2687bbe-78dd-4ad2-8180-8bd4698db21e-config\") pod \"route-controller-manager-67cc8d88b-rgfvl\" (UID: \"f2687bbe-78dd-4ad2-8180-8bd4698db21e\") " pod="openshift-route-controller-manager/route-controller-manager-67cc8d88b-rgfvl" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.755329 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be774e7b-3a10-48ad-84fe-6c6d880fc454-serving-cert\") pod \"controller-manager-856f74b64f-xclmx\" (UID: \"be774e7b-3a10-48ad-84fe-6c6d880fc454\") " pod="openshift-controller-manager/controller-manager-856f74b64f-xclmx" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.755352 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2687bbe-78dd-4ad2-8180-8bd4698db21e-client-ca\") pod \"route-controller-manager-67cc8d88b-rgfvl\" (UID: \"f2687bbe-78dd-4ad2-8180-8bd4698db21e\") " pod="openshift-route-controller-manager/route-controller-manager-67cc8d88b-rgfvl" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.856404 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsrg2\" 
(UniqueName: \"kubernetes.io/projected/be774e7b-3a10-48ad-84fe-6c6d880fc454-kube-api-access-vsrg2\") pod \"controller-manager-856f74b64f-xclmx\" (UID: \"be774e7b-3a10-48ad-84fe-6c6d880fc454\") " pod="openshift-controller-manager/controller-manager-856f74b64f-xclmx" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.856453 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2687bbe-78dd-4ad2-8180-8bd4698db21e-serving-cert\") pod \"route-controller-manager-67cc8d88b-rgfvl\" (UID: \"f2687bbe-78dd-4ad2-8180-8bd4698db21e\") " pod="openshift-route-controller-manager/route-controller-manager-67cc8d88b-rgfvl" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.856476 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/be774e7b-3a10-48ad-84fe-6c6d880fc454-proxy-ca-bundles\") pod \"controller-manager-856f74b64f-xclmx\" (UID: \"be774e7b-3a10-48ad-84fe-6c6d880fc454\") " pod="openshift-controller-manager/controller-manager-856f74b64f-xclmx" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.856496 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2687bbe-78dd-4ad2-8180-8bd4698db21e-config\") pod \"route-controller-manager-67cc8d88b-rgfvl\" (UID: \"f2687bbe-78dd-4ad2-8180-8bd4698db21e\") " pod="openshift-route-controller-manager/route-controller-manager-67cc8d88b-rgfvl" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.856522 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be774e7b-3a10-48ad-84fe-6c6d880fc454-serving-cert\") pod \"controller-manager-856f74b64f-xclmx\" (UID: \"be774e7b-3a10-48ad-84fe-6c6d880fc454\") " pod="openshift-controller-manager/controller-manager-856f74b64f-xclmx" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.856550 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2687bbe-78dd-4ad2-8180-8bd4698db21e-client-ca\") pod \"route-controller-manager-67cc8d88b-rgfvl\" (UID: \"f2687bbe-78dd-4ad2-8180-8bd4698db21e\") " pod="openshift-route-controller-manager/route-controller-manager-67cc8d88b-rgfvl" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.856595 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be774e7b-3a10-48ad-84fe-6c6d880fc454-client-ca\") pod \"controller-manager-856f74b64f-xclmx\" (UID: \"be774e7b-3a10-48ad-84fe-6c6d880fc454\") " pod="openshift-controller-manager/controller-manager-856f74b64f-xclmx" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.856629 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be774e7b-3a10-48ad-84fe-6c6d880fc454-config\") pod \"controller-manager-856f74b64f-xclmx\" (UID: \"be774e7b-3a10-48ad-84fe-6c6d880fc454\") " pod="openshift-controller-manager/controller-manager-856f74b64f-xclmx" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.856650 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tzj2\" (UniqueName: \"kubernetes.io/projected/f2687bbe-78dd-4ad2-8180-8bd4698db21e-kube-api-access-8tzj2\") pod \"route-controller-manager-67cc8d88b-rgfvl\" (UID: 
\"f2687bbe-78dd-4ad2-8180-8bd4698db21e\") " pod="openshift-route-controller-manager/route-controller-manager-67cc8d88b-rgfvl" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.857737 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be774e7b-3a10-48ad-84fe-6c6d880fc454-client-ca\") pod \"controller-manager-856f74b64f-xclmx\" (UID: \"be774e7b-3a10-48ad-84fe-6c6d880fc454\") " pod="openshift-controller-manager/controller-manager-856f74b64f-xclmx" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.857974 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/be774e7b-3a10-48ad-84fe-6c6d880fc454-proxy-ca-bundles\") pod \"controller-manager-856f74b64f-xclmx\" (UID: \"be774e7b-3a10-48ad-84fe-6c6d880fc454\") " pod="openshift-controller-manager/controller-manager-856f74b64f-xclmx" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.858127 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2687bbe-78dd-4ad2-8180-8bd4698db21e-config\") pod \"route-controller-manager-67cc8d88b-rgfvl\" (UID: \"f2687bbe-78dd-4ad2-8180-8bd4698db21e\") " pod="openshift-route-controller-manager/route-controller-manager-67cc8d88b-rgfvl" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.858169 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be774e7b-3a10-48ad-84fe-6c6d880fc454-config\") pod \"controller-manager-856f74b64f-xclmx\" (UID: \"be774e7b-3a10-48ad-84fe-6c6d880fc454\") " pod="openshift-controller-manager/controller-manager-856f74b64f-xclmx" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.859006 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2687bbe-78dd-4ad2-8180-8bd4698db21e-client-ca\") pod \"route-controller-manager-67cc8d88b-rgfvl\" (UID: \"f2687bbe-78dd-4ad2-8180-8bd4698db21e\") " pod="openshift-route-controller-manager/route-controller-manager-67cc8d88b-rgfvl" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.862647 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be774e7b-3a10-48ad-84fe-6c6d880fc454-serving-cert\") pod \"controller-manager-856f74b64f-xclmx\" (UID: \"be774e7b-3a10-48ad-84fe-6c6d880fc454\") " pod="openshift-controller-manager/controller-manager-856f74b64f-xclmx" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.863369 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2687bbe-78dd-4ad2-8180-8bd4698db21e-serving-cert\") pod \"route-controller-manager-67cc8d88b-rgfvl\" (UID: \"f2687bbe-78dd-4ad2-8180-8bd4698db21e\") " pod="openshift-route-controller-manager/route-controller-manager-67cc8d88b-rgfvl" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.875380 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsrg2\" (UniqueName: \"kubernetes.io/projected/be774e7b-3a10-48ad-84fe-6c6d880fc454-kube-api-access-vsrg2\") pod \"controller-manager-856f74b64f-xclmx\" (UID: \"be774e7b-3a10-48ad-84fe-6c6d880fc454\") " pod="openshift-controller-manager/controller-manager-856f74b64f-xclmx" Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.882685 4869 operation_generator.go:637] "MountVolume.SetUp 
Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.884323 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ccff911-3d21-4cd0-8629-7e53f770f126" path="/var/lib/kubelet/pods/4ccff911-3d21-4cd0-8629-7e53f770f126/volumes"
Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.885095 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5172795a-9886-4a3d-8199-04cfcefc1c0f" path="/var/lib/kubelet/pods/5172795a-9886-4a3d-8199-04cfcefc1c0f/volumes"
Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.950609 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67cc8d88b-rgfvl"
Jan 30 21:48:59 crc kubenswrapper[4869]: I0130 21:48:59.959437 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-856f74b64f-xclmx"
Jan 30 21:49:00 crc kubenswrapper[4869]: I0130 21:49:00.385630 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67cc8d88b-rgfvl"]
Jan 30 21:49:00 crc kubenswrapper[4869]: I0130 21:49:00.440705 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-856f74b64f-xclmx"]
Jan 30 21:49:01 crc kubenswrapper[4869]: I0130 21:49:01.214225 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-856f74b64f-xclmx" event={"ID":"be774e7b-3a10-48ad-84fe-6c6d880fc454","Type":"ContainerStarted","Data":"9c916ae0f815255729fe11860de61766a3bd97e717e0612452df006da648b2d8"}
Jan 30 21:49:01 crc kubenswrapper[4869]: I0130 21:49:01.214555 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-856f74b64f-xclmx" event={"ID":"be774e7b-3a10-48ad-84fe-6c6d880fc454","Type":"ContainerStarted","Data":"a5f142600a0f119a1da691a07141e050b4ecac2f12cffcd52b1c8d5134ddd32a"}
Jan 30 21:49:01 crc kubenswrapper[4869]: I0130 21:49:01.214577 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-856f74b64f-xclmx"
Jan 30 21:49:01 crc kubenswrapper[4869]: I0130 21:49:01.215792 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67cc8d88b-rgfvl" event={"ID":"f2687bbe-78dd-4ad2-8180-8bd4698db21e","Type":"ContainerStarted","Data":"3712e17c33d7679a708868ecc991b35b6757c1cebd73a65dce72c4e6bde85f1d"}
Jan 30 21:49:01 crc kubenswrapper[4869]: I0130 21:49:01.215820 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67cc8d88b-rgfvl" event={"ID":"f2687bbe-78dd-4ad2-8180-8bd4698db21e","Type":"ContainerStarted","Data":"a64e4ca69b5f43fc47d3f2880b06be2a7f87d1e610bed7e7601b3224d2741bc0"}
Jan 30 21:49:01 crc kubenswrapper[4869]: I0130 21:49:01.216015 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-67cc8d88b-rgfvl"
Jan 30 21:49:01 crc kubenswrapper[4869]: I0130 21:49:01.219350 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-856f74b64f-xclmx"
Jan 30 21:49:01 crc kubenswrapper[4869]: I0130 21:49:01.220464 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-67cc8d88b-rgfvl"
Jan 30 21:49:01 crc kubenswrapper[4869]: I0130 21:49:01.230189 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-856f74b64f-xclmx" podStartSLOduration=4.230167469 podStartE2EDuration="4.230167469s" podCreationTimestamp="2026-01-30 21:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:49:01.228740405 +0000 UTC m=+342.114498440" watchObservedRunningTime="2026-01-30 21:49:01.230167469 +0000 UTC m=+342.115925494"
Jan 30 21:49:01 crc kubenswrapper[4869]: I0130 21:49:01.307180 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-67cc8d88b-rgfvl" podStartSLOduration=4.307161608 podStartE2EDuration="4.307161608s" podCreationTimestamp="2026-01-30 21:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:49:01.306229159 +0000 UTC m=+342.191987194" watchObservedRunningTime="2026-01-30 21:49:01.307161608 +0000 UTC m=+342.192919633"
Jan 30 21:49:17 crc kubenswrapper[4869]: I0130 21:49:17.682587 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-856f74b64f-xclmx"]
Jan 30 21:49:17 crc kubenswrapper[4869]: I0130 21:49:17.683281 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-856f74b64f-xclmx" podUID="be774e7b-3a10-48ad-84fe-6c6d880fc454" containerName="controller-manager" containerID="cri-o://9c916ae0f815255729fe11860de61766a3bd97e717e0612452df006da648b2d8" gracePeriod=30
Jan 30 21:49:18 crc kubenswrapper[4869]: I0130 21:49:18.221229 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-856f74b64f-xclmx"
Jan 30 21:49:18 crc kubenswrapper[4869]: I0130 21:49:18.326345 4869 generic.go:334] "Generic (PLEG): container finished" podID="be774e7b-3a10-48ad-84fe-6c6d880fc454" containerID="9c916ae0f815255729fe11860de61766a3bd97e717e0612452df006da648b2d8" exitCode=0
Jan 30 21:49:18 crc kubenswrapper[4869]: I0130 21:49:18.326397 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-856f74b64f-xclmx" event={"ID":"be774e7b-3a10-48ad-84fe-6c6d880fc454","Type":"ContainerDied","Data":"9c916ae0f815255729fe11860de61766a3bd97e717e0612452df006da648b2d8"}
Jan 30 21:49:18 crc kubenswrapper[4869]: I0130 21:49:18.326407 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-856f74b64f-xclmx"
Jan 30 21:49:18 crc kubenswrapper[4869]: I0130 21:49:18.326430 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-856f74b64f-xclmx" event={"ID":"be774e7b-3a10-48ad-84fe-6c6d880fc454","Type":"ContainerDied","Data":"a5f142600a0f119a1da691a07141e050b4ecac2f12cffcd52b1c8d5134ddd32a"}
Jan 30 21:49:18 crc kubenswrapper[4869]: I0130 21:49:18.326453 4869 scope.go:117] "RemoveContainer" containerID="9c916ae0f815255729fe11860de61766a3bd97e717e0612452df006da648b2d8"
Jan 30 21:49:18 crc kubenswrapper[4869]: I0130 21:49:18.341676 4869 scope.go:117] "RemoveContainer" containerID="9c916ae0f815255729fe11860de61766a3bd97e717e0612452df006da648b2d8"
Jan 30 21:49:18 crc kubenswrapper[4869]: E0130 21:49:18.342077 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c916ae0f815255729fe11860de61766a3bd97e717e0612452df006da648b2d8\": container with ID starting with 9c916ae0f815255729fe11860de61766a3bd97e717e0612452df006da648b2d8 not found: ID does not exist" containerID="9c916ae0f815255729fe11860de61766a3bd97e717e0612452df006da648b2d8"
Jan 30 21:49:18 crc kubenswrapper[4869]: I0130 21:49:18.342104 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c916ae0f815255729fe11860de61766a3bd97e717e0612452df006da648b2d8"} err="failed to get container status \"9c916ae0f815255729fe11860de61766a3bd97e717e0612452df006da648b2d8\": rpc error: code = NotFound desc = could not find container \"9c916ae0f815255729fe11860de61766a3bd97e717e0612452df006da648b2d8\": container with ID starting with 9c916ae0f815255729fe11860de61766a3bd97e717e0612452df006da648b2d8 not found: ID does not exist"
Jan 30 21:49:18 crc kubenswrapper[4869]: I0130 21:49:18.392127 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be774e7b-3a10-48ad-84fe-6c6d880fc454-serving-cert\") pod \"be774e7b-3a10-48ad-84fe-6c6d880fc454\" (UID: \"be774e7b-3a10-48ad-84fe-6c6d880fc454\") "
Jan 30 21:49:18 crc kubenswrapper[4869]: I0130 21:49:18.392191 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsrg2\" (UniqueName: \"kubernetes.io/projected/be774e7b-3a10-48ad-84fe-6c6d880fc454-kube-api-access-vsrg2\") pod \"be774e7b-3a10-48ad-84fe-6c6d880fc454\" (UID: \"be774e7b-3a10-48ad-84fe-6c6d880fc454\") "
Jan 30 21:49:18 crc kubenswrapper[4869]: I0130 21:49:18.392222 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be774e7b-3a10-48ad-84fe-6c6d880fc454-config\") pod \"be774e7b-3a10-48ad-84fe-6c6d880fc454\" (UID: \"be774e7b-3a10-48ad-84fe-6c6d880fc454\") "
Jan 30 21:49:18 crc kubenswrapper[4869]: I0130 21:49:18.392267 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be774e7b-3a10-48ad-84fe-6c6d880fc454-client-ca\") pod \"be774e7b-3a10-48ad-84fe-6c6d880fc454\" (UID: \"be774e7b-3a10-48ad-84fe-6c6d880fc454\") "
Jan 30 21:49:18 crc kubenswrapper[4869]: I0130 21:49:18.392285 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/be774e7b-3a10-48ad-84fe-6c6d880fc454-proxy-ca-bundles\") pod \"be774e7b-3a10-48ad-84fe-6c6d880fc454\" (UID: \"be774e7b-3a10-48ad-84fe-6c6d880fc454\") "
Jan 30 21:49:18 crc kubenswrapper[4869]: I0130 21:49:18.392969 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be774e7b-3a10-48ad-84fe-6c6d880fc454-client-ca" (OuterVolumeSpecName: "client-ca") pod "be774e7b-3a10-48ad-84fe-6c6d880fc454" (UID: "be774e7b-3a10-48ad-84fe-6c6d880fc454"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:49:18 crc kubenswrapper[4869]: I0130 21:49:18.393050 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be774e7b-3a10-48ad-84fe-6c6d880fc454-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "be774e7b-3a10-48ad-84fe-6c6d880fc454" (UID: "be774e7b-3a10-48ad-84fe-6c6d880fc454"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:49:18 crc kubenswrapper[4869]: I0130 21:49:18.393533 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be774e7b-3a10-48ad-84fe-6c6d880fc454-config" (OuterVolumeSpecName: "config") pod "be774e7b-3a10-48ad-84fe-6c6d880fc454" (UID: "be774e7b-3a10-48ad-84fe-6c6d880fc454"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:49:18 crc kubenswrapper[4869]: I0130 21:49:18.397585 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be774e7b-3a10-48ad-84fe-6c6d880fc454-kube-api-access-vsrg2" (OuterVolumeSpecName: "kube-api-access-vsrg2") pod "be774e7b-3a10-48ad-84fe-6c6d880fc454" (UID: "be774e7b-3a10-48ad-84fe-6c6d880fc454"). InnerVolumeSpecName "kube-api-access-vsrg2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:49:18 crc kubenswrapper[4869]: I0130 21:49:18.407681 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be774e7b-3a10-48ad-84fe-6c6d880fc454-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "be774e7b-3a10-48ad-84fe-6c6d880fc454" (UID: "be774e7b-3a10-48ad-84fe-6c6d880fc454"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:49:18 crc kubenswrapper[4869]: I0130 21:49:18.493188 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be774e7b-3a10-48ad-84fe-6c6d880fc454-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 21:49:18 crc kubenswrapper[4869]: I0130 21:49:18.493238 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsrg2\" (UniqueName: \"kubernetes.io/projected/be774e7b-3a10-48ad-84fe-6c6d880fc454-kube-api-access-vsrg2\") on node \"crc\" DevicePath \"\""
Jan 30 21:49:18 crc kubenswrapper[4869]: I0130 21:49:18.493250 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be774e7b-3a10-48ad-84fe-6c6d880fc454-config\") on node \"crc\" DevicePath \"\""
Jan 30 21:49:18 crc kubenswrapper[4869]: I0130 21:49:18.493257 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be774e7b-3a10-48ad-84fe-6c6d880fc454-client-ca\") on node \"crc\" DevicePath \"\""
Jan 30 21:49:18 crc kubenswrapper[4869]: I0130 21:49:18.493266 4869 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/be774e7b-3a10-48ad-84fe-6c6d880fc454-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 30 21:49:18 crc kubenswrapper[4869]: I0130 21:49:18.652611 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-856f74b64f-xclmx"]
Jan 30 21:49:18 crc kubenswrapper[4869]: I0130 21:49:18.658042 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-856f74b64f-xclmx"]
Jan 30 21:49:19 crc kubenswrapper[4869]: I0130 21:49:19.644127 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6cd599f9ff-j698v"]
Jan 30 21:49:19 crc kubenswrapper[4869]: E0130 21:49:19.644337 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be774e7b-3a10-48ad-84fe-6c6d880fc454" containerName="controller-manager"
Jan 30 21:49:19 crc kubenswrapper[4869]: I0130 21:49:19.644349 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="be774e7b-3a10-48ad-84fe-6c6d880fc454" containerName="controller-manager"
Jan 30 21:49:19 crc kubenswrapper[4869]: I0130 21:49:19.644451 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="be774e7b-3a10-48ad-84fe-6c6d880fc454" containerName="controller-manager"
Jan 30 21:49:19 crc kubenswrapper[4869]: I0130 21:49:19.644788 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cd599f9ff-j698v"
Jan 30 21:49:19 crc kubenswrapper[4869]: I0130 21:49:19.647115 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 30 21:49:19 crc kubenswrapper[4869]: I0130 21:49:19.647447 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 30 21:49:19 crc kubenswrapper[4869]: I0130 21:49:19.648131 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 30 21:49:19 crc kubenswrapper[4869]: I0130 21:49:19.648329 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 30 21:49:19 crc kubenswrapper[4869]: I0130 21:49:19.648406 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 30 21:49:19 crc kubenswrapper[4869]: I0130 21:49:19.648531 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 30 21:49:19 crc kubenswrapper[4869]: I0130 21:49:19.655701 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 30 21:49:19 crc kubenswrapper[4869]: I0130 21:49:19.657880 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cd599f9ff-j698v"]
Jan 30 21:49:19 crc kubenswrapper[4869]: I0130 21:49:19.806482 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcrss\" (UniqueName: \"kubernetes.io/projected/c6c7d7b0-e140-4b76-a8d2-9e43c507bad8-kube-api-access-pcrss\") pod \"controller-manager-6cd599f9ff-j698v\" (UID: \"c6c7d7b0-e140-4b76-a8d2-9e43c507bad8\") " pod="openshift-controller-manager/controller-manager-6cd599f9ff-j698v"
Jan 30 21:49:19 crc kubenswrapper[4869]: I0130 21:49:19.806552 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6c7d7b0-e140-4b76-a8d2-9e43c507bad8-serving-cert\") pod \"controller-manager-6cd599f9ff-j698v\" (UID: \"c6c7d7b0-e140-4b76-a8d2-9e43c507bad8\") " pod="openshift-controller-manager/controller-manager-6cd599f9ff-j698v"
Jan 30 21:49:19 crc kubenswrapper[4869]: I0130 21:49:19.806616 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c6c7d7b0-e140-4b76-a8d2-9e43c507bad8-proxy-ca-bundles\") pod \"controller-manager-6cd599f9ff-j698v\" (UID: \"c6c7d7b0-e140-4b76-a8d2-9e43c507bad8\") " pod="openshift-controller-manager/controller-manager-6cd599f9ff-j698v"
Jan 30 21:49:19 crc kubenswrapper[4869]: I0130 21:49:19.806655 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c6c7d7b0-e140-4b76-a8d2-9e43c507bad8-client-ca\") pod \"controller-manager-6cd599f9ff-j698v\" (UID: \"c6c7d7b0-e140-4b76-a8d2-9e43c507bad8\") " pod="openshift-controller-manager/controller-manager-6cd599f9ff-j698v"
Jan 30 21:49:19 crc kubenswrapper[4869]: I0130 21:49:19.806691 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6c7d7b0-e140-4b76-a8d2-9e43c507bad8-config\") pod \"controller-manager-6cd599f9ff-j698v\" (UID: \"c6c7d7b0-e140-4b76-a8d2-9e43c507bad8\") " pod="openshift-controller-manager/controller-manager-6cd599f9ff-j698v"
Jan 30 21:49:19 crc kubenswrapper[4869]: I0130 21:49:19.884837 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be774e7b-3a10-48ad-84fe-6c6d880fc454" path="/var/lib/kubelet/pods/be774e7b-3a10-48ad-84fe-6c6d880fc454/volumes"
Jan 30 21:49:19 crc kubenswrapper[4869]: I0130 21:49:19.907484 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcrss\" (UniqueName: \"kubernetes.io/projected/c6c7d7b0-e140-4b76-a8d2-9e43c507bad8-kube-api-access-pcrss\") pod \"controller-manager-6cd599f9ff-j698v\" (UID: \"c6c7d7b0-e140-4b76-a8d2-9e43c507bad8\") " pod="openshift-controller-manager/controller-manager-6cd599f9ff-j698v"
Jan 30 21:49:19 crc kubenswrapper[4869]: I0130 21:49:19.907861 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6c7d7b0-e140-4b76-a8d2-9e43c507bad8-serving-cert\") pod \"controller-manager-6cd599f9ff-j698v\" (UID: \"c6c7d7b0-e140-4b76-a8d2-9e43c507bad8\") " pod="openshift-controller-manager/controller-manager-6cd599f9ff-j698v"
Jan 30 21:49:19 crc kubenswrapper[4869]: I0130 21:49:19.908036 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c6c7d7b0-e140-4b76-a8d2-9e43c507bad8-proxy-ca-bundles\") pod \"controller-manager-6cd599f9ff-j698v\" (UID: \"c6c7d7b0-e140-4b76-a8d2-9e43c507bad8\") " pod="openshift-controller-manager/controller-manager-6cd599f9ff-j698v"
Jan 30 21:49:19 crc kubenswrapper[4869]: I0130 21:49:19.908280 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c6c7d7b0-e140-4b76-a8d2-9e43c507bad8-client-ca\") pod \"controller-manager-6cd599f9ff-j698v\" (UID: \"c6c7d7b0-e140-4b76-a8d2-9e43c507bad8\") " pod="openshift-controller-manager/controller-manager-6cd599f9ff-j698v"
Jan 30 21:49:19 crc kubenswrapper[4869]: I0130 21:49:19.909029 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6c7d7b0-e140-4b76-a8d2-9e43c507bad8-config\") pod \"controller-manager-6cd599f9ff-j698v\" (UID: \"c6c7d7b0-e140-4b76-a8d2-9e43c507bad8\") " pod="openshift-controller-manager/controller-manager-6cd599f9ff-j698v"
Jan 30 21:49:19 crc kubenswrapper[4869]: I0130 21:49:19.910019 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 30 21:49:19 crc kubenswrapper[4869]: I0130 21:49:19.910634 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 30 21:49:19 crc kubenswrapper[4869]: I0130 21:49:19.910770 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 30 21:49:19 crc kubenswrapper[4869]: I0130 21:49:19.916847 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 30 21:49:19 crc kubenswrapper[4869]: I0130 21:49:19.920101 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c6c7d7b0-e140-4b76-a8d2-9e43c507bad8-client-ca\") pod \"controller-manager-6cd599f9ff-j698v\" (UID: \"c6c7d7b0-e140-4b76-a8d2-9e43c507bad8\") " pod="openshift-controller-manager/controller-manager-6cd599f9ff-j698v"
Jan 30 21:49:19 crc kubenswrapper[4869]: I0130 21:49:19.920285 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c6c7d7b0-e140-4b76-a8d2-9e43c507bad8-proxy-ca-bundles\") pod \"controller-manager-6cd599f9ff-j698v\" (UID: \"c6c7d7b0-e140-4b76-a8d2-9e43c507bad8\") " pod="openshift-controller-manager/controller-manager-6cd599f9ff-j698v"
Jan 30 21:49:19 crc kubenswrapper[4869]: I0130 21:49:19.920555 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6c7d7b0-e140-4b76-a8d2-9e43c507bad8-config\") pod \"controller-manager-6cd599f9ff-j698v\" (UID: \"c6c7d7b0-e140-4b76-a8d2-9e43c507bad8\") " pod="openshift-controller-manager/controller-manager-6cd599f9ff-j698v"
Jan 30 21:49:19 crc kubenswrapper[4869]: I0130 21:49:19.922853 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6c7d7b0-e140-4b76-a8d2-9e43c507bad8-serving-cert\") pod \"controller-manager-6cd599f9ff-j698v\" (UID: \"c6c7d7b0-e140-4b76-a8d2-9e43c507bad8\") " pod="openshift-controller-manager/controller-manager-6cd599f9ff-j698v"
Jan 30 21:49:19 crc kubenswrapper[4869]: I0130 21:49:19.924929 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 30 21:49:19 crc kubenswrapper[4869]: I0130 21:49:19.934640 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 30 21:49:19 crc kubenswrapper[4869]: I0130 21:49:19.949970 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcrss\" (UniqueName: \"kubernetes.io/projected/c6c7d7b0-e140-4b76-a8d2-9e43c507bad8-kube-api-access-pcrss\") pod \"controller-manager-6cd599f9ff-j698v\" (UID: \"c6c7d7b0-e140-4b76-a8d2-9e43c507bad8\") " pod="openshift-controller-manager/controller-manager-6cd599f9ff-j698v"
Jan 30 21:49:19 crc kubenswrapper[4869]: I0130 21:49:19.963297 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 30 21:49:19 crc kubenswrapper[4869]: I0130 21:49:19.971858 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cd599f9ff-j698v"
Jan 30 21:49:20 crc kubenswrapper[4869]: I0130 21:49:20.407686 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cd599f9ff-j698v"]
Jan 30 21:49:20 crc kubenswrapper[4869]: W0130 21:49:20.415811 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6c7d7b0_e140_4b76_a8d2_9e43c507bad8.slice/crio-a17c43228f5aaa9786140e1f004553a6ee6669bfd4b687635bf113a8a276f10d WatchSource:0}: Error finding container a17c43228f5aaa9786140e1f004553a6ee6669bfd4b687635bf113a8a276f10d: Status 404 returned error can't find the container with id a17c43228f5aaa9786140e1f004553a6ee6669bfd4b687635bf113a8a276f10d
Jan 30 21:49:21 crc kubenswrapper[4869]: I0130 21:49:21.350291 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cd599f9ff-j698v" event={"ID":"c6c7d7b0-e140-4b76-a8d2-9e43c507bad8","Type":"ContainerStarted","Data":"f13d7fd45014b2c4479b17bc3f8c68b29fb48ab34799ea22e4ce604a5dee5ca9"}
Jan 30 21:49:21 crc kubenswrapper[4869]: I0130 21:49:21.350824 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6cd599f9ff-j698v"
Jan 30 21:49:21 crc kubenswrapper[4869]: I0130 21:49:21.350836 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cd599f9ff-j698v" event={"ID":"c6c7d7b0-e140-4b76-a8d2-9e43c507bad8","Type":"ContainerStarted","Data":"a17c43228f5aaa9786140e1f004553a6ee6669bfd4b687635bf113a8a276f10d"}
Jan 30 21:49:21 crc kubenswrapper[4869]: I0130 21:49:21.357324 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6cd599f9ff-j698v"
Jan 30 21:49:21 crc kubenswrapper[4869]: I0130 21:49:21.366179 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6cd599f9ff-j698v" podStartSLOduration=4.366163627 podStartE2EDuration="4.366163627s" podCreationTimestamp="2026-01-30 21:49:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:49:21.362967427 +0000 UTC m=+362.248725462" watchObservedRunningTime="2026-01-30 21:49:21.366163627 +0000 UTC m=+362.251921652"
Jan 30 21:49:22 crc kubenswrapper[4869]: I0130 21:49:22.014951 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt" podUID="e7c790ef-ae52-4809-b6f2-088811793867" containerName="oauth-openshift" containerID="cri-o://d2c17cd7962638c1d69167c3957921e692a78b0a67de0c31a0afa22696cd0a23" gracePeriod=15
Jan 30 21:49:22 crc kubenswrapper[4869]: I0130 21:49:22.357613 4869 generic.go:334] "Generic (PLEG): container finished" podID="e7c790ef-ae52-4809-b6f2-088811793867" containerID="d2c17cd7962638c1d69167c3957921e692a78b0a67de0c31a0afa22696cd0a23" exitCode=0
Jan 30 21:49:22 crc kubenswrapper[4869]: I0130 21:49:22.358089 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt" event={"ID":"e7c790ef-ae52-4809-b6f2-088811793867","Type":"ContainerDied","Data":"d2c17cd7962638c1d69167c3957921e692a78b0a67de0c31a0afa22696cd0a23"}
Jan 30 21:49:22 crc kubenswrapper[4869]: I0130 21:49:22.441925 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt"
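The PLEG lines carry a compact JSON payload, event={"ID":...,"Type":...,"Data":...}. Each pod start above shows two ContainerStarted events because Data is the application container ID in one and the pod sandbox ID in the other (here the crio-a17c4322... sandbox, the same ID cadvisor's watch briefly failed to resolve in the W-level line). A grouping sketch, assuming the payload stays valid JSON as in these lines:

```python
import json
import re
from collections import defaultdict

EVENT = re.compile(r'event=(\{.*?\})')

def pleg_events_by_pod(lines):
    """Group PLEG event payloads by pod UID; each (Type, Data) pair carries
    either an application container ID or the pod's sandbox ID."""
    by_pod = defaultdict(list)
    for line in lines:
        m = EVENT.search(line)
        if m:
            ev = json.loads(m.group(1))
            by_pod[ev["ID"]].append((ev["Type"], ev["Data"]))
    return by_pod
```

For pod c6c7d7b0-e140-4b76-a8d2-9e43c507bad8 this yields two ContainerStarted entries, matching the two lines above.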
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt" Jan 30 21:49:22 crc kubenswrapper[4869]: I0130 21:49:22.541088 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-system-router-certs\") pod \"e7c790ef-ae52-4809-b6f2-088811793867\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " Jan 30 21:49:22 crc kubenswrapper[4869]: I0130 21:49:22.541152 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-system-trusted-ca-bundle\") pod \"e7c790ef-ae52-4809-b6f2-088811793867\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " Jan 30 21:49:22 crc kubenswrapper[4869]: I0130 21:49:22.541181 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-system-ocp-branding-template\") pod \"e7c790ef-ae52-4809-b6f2-088811793867\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " Jan 30 21:49:22 crc kubenswrapper[4869]: I0130 21:49:22.541200 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-user-template-login\") pod \"e7c790ef-ae52-4809-b6f2-088811793867\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " Jan 30 21:49:22 crc kubenswrapper[4869]: I0130 21:49:22.541220 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-user-template-error\") pod \"e7c790ef-ae52-4809-b6f2-088811793867\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " Jan 30 21:49:22 crc kubenswrapper[4869]: I0130 21:49:22.541239 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e7c790ef-ae52-4809-b6f2-088811793867-audit-policies\") pod \"e7c790ef-ae52-4809-b6f2-088811793867\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " Jan 30 21:49:22 crc kubenswrapper[4869]: I0130 21:49:22.541272 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wm6zq\" (UniqueName: \"kubernetes.io/projected/e7c790ef-ae52-4809-b6f2-088811793867-kube-api-access-wm6zq\") pod \"e7c790ef-ae52-4809-b6f2-088811793867\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " Jan 30 21:49:22 crc kubenswrapper[4869]: I0130 21:49:22.541300 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-system-cliconfig\") pod \"e7c790ef-ae52-4809-b6f2-088811793867\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " Jan 30 21:49:22 crc kubenswrapper[4869]: I0130 21:49:22.541320 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e7c790ef-ae52-4809-b6f2-088811793867-audit-dir\") pod \"e7c790ef-ae52-4809-b6f2-088811793867\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " Jan 30 
21:49:22 crc kubenswrapper[4869]: I0130 21:49:22.541338 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-system-service-ca\") pod \"e7c790ef-ae52-4809-b6f2-088811793867\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " Jan 30 21:49:22 crc kubenswrapper[4869]: I0130 21:49:22.541354 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-system-serving-cert\") pod \"e7c790ef-ae52-4809-b6f2-088811793867\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " Jan 30 21:49:22 crc kubenswrapper[4869]: I0130 21:49:22.541381 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-user-idp-0-file-data\") pod \"e7c790ef-ae52-4809-b6f2-088811793867\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " Jan 30 21:49:22 crc kubenswrapper[4869]: I0130 21:49:22.541401 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-user-template-provider-selection\") pod \"e7c790ef-ae52-4809-b6f2-088811793867\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " Jan 30 21:49:22 crc kubenswrapper[4869]: I0130 21:49:22.541445 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-system-session\") pod \"e7c790ef-ae52-4809-b6f2-088811793867\" (UID: \"e7c790ef-ae52-4809-b6f2-088811793867\") " Jan 30 21:49:22 crc kubenswrapper[4869]: I0130 21:49:22.542050 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "e7c790ef-ae52-4809-b6f2-088811793867" (UID: "e7c790ef-ae52-4809-b6f2-088811793867"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:49:22 crc kubenswrapper[4869]: I0130 21:49:22.542798 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7c790ef-ae52-4809-b6f2-088811793867-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "e7c790ef-ae52-4809-b6f2-088811793867" (UID: "e7c790ef-ae52-4809-b6f2-088811793867"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:49:22 crc kubenswrapper[4869]: I0130 21:49:22.542866 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7c790ef-ae52-4809-b6f2-088811793867-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "e7c790ef-ae52-4809-b6f2-088811793867" (UID: "e7c790ef-ae52-4809-b6f2-088811793867"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:49:22 crc kubenswrapper[4869]: I0130 21:49:22.543307 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "e7c790ef-ae52-4809-b6f2-088811793867" (UID: "e7c790ef-ae52-4809-b6f2-088811793867"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:49:22 crc kubenswrapper[4869]: I0130 21:49:22.543381 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "e7c790ef-ae52-4809-b6f2-088811793867" (UID: "e7c790ef-ae52-4809-b6f2-088811793867"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:49:22 crc kubenswrapper[4869]: I0130 21:49:22.547143 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "e7c790ef-ae52-4809-b6f2-088811793867" (UID: "e7c790ef-ae52-4809-b6f2-088811793867"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:49:22 crc kubenswrapper[4869]: I0130 21:49:22.547699 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "e7c790ef-ae52-4809-b6f2-088811793867" (UID: "e7c790ef-ae52-4809-b6f2-088811793867"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:49:22 crc kubenswrapper[4869]: I0130 21:49:22.547973 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "e7c790ef-ae52-4809-b6f2-088811793867" (UID: "e7c790ef-ae52-4809-b6f2-088811793867"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:49:22 crc kubenswrapper[4869]: I0130 21:49:22.548327 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "e7c790ef-ae52-4809-b6f2-088811793867" (UID: "e7c790ef-ae52-4809-b6f2-088811793867"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:49:22 crc kubenswrapper[4869]: I0130 21:49:22.552519 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "e7c790ef-ae52-4809-b6f2-088811793867" (UID: "e7c790ef-ae52-4809-b6f2-088811793867"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:49:22 crc kubenswrapper[4869]: I0130 21:49:22.560280 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "e7c790ef-ae52-4809-b6f2-088811793867" (UID: "e7c790ef-ae52-4809-b6f2-088811793867"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:49:22 crc kubenswrapper[4869]: I0130 21:49:22.560280 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "e7c790ef-ae52-4809-b6f2-088811793867" (UID: "e7c790ef-ae52-4809-b6f2-088811793867"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:49:22 crc kubenswrapper[4869]: I0130 21:49:22.560518 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7c790ef-ae52-4809-b6f2-088811793867-kube-api-access-wm6zq" (OuterVolumeSpecName: "kube-api-access-wm6zq") pod "e7c790ef-ae52-4809-b6f2-088811793867" (UID: "e7c790ef-ae52-4809-b6f2-088811793867"). InnerVolumeSpecName "kube-api-access-wm6zq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:49:22 crc kubenswrapper[4869]: I0130 21:49:22.567509 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "e7c790ef-ae52-4809-b6f2-088811793867" (UID: "e7c790ef-ae52-4809-b6f2-088811793867"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:49:22 crc kubenswrapper[4869]: I0130 21:49:22.642532 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:49:22 crc kubenswrapper[4869]: I0130 21:49:22.642568 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 30 21:49:22 crc kubenswrapper[4869]: I0130 21:49:22.642583 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 30 21:49:22 crc kubenswrapper[4869]: I0130 21:49:22.642593 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:49:22 crc kubenswrapper[4869]: I0130 21:49:22.642606 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:49:22 crc kubenswrapper[4869]: I0130 21:49:22.642615 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 30 21:49:22 crc kubenswrapper[4869]: I0130 21:49:22.642623 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 30 21:49:22 crc kubenswrapper[4869]: I0130 21:49:22.642633 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 30 21:49:22 crc kubenswrapper[4869]: I0130 21:49:22.642642 4869 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e7c790ef-ae52-4809-b6f2-088811793867-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 21:49:22 crc kubenswrapper[4869]: I0130 21:49:22.642650 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wm6zq\" (UniqueName: \"kubernetes.io/projected/e7c790ef-ae52-4809-b6f2-088811793867-kube-api-access-wm6zq\") on node \"crc\" DevicePath \"\"" Jan 30 21:49:22 crc kubenswrapper[4869]: I0130 21:49:22.642660 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 30 21:49:22 crc kubenswrapper[4869]: I0130 21:49:22.642668 4869 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/e7c790ef-ae52-4809-b6f2-088811793867-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 30 21:49:22 crc kubenswrapper[4869]: I0130 21:49:22.642676 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:49:22 crc kubenswrapper[4869]: I0130 21:49:22.642685 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e7c790ef-ae52-4809-b6f2-088811793867-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:49:23 crc kubenswrapper[4869]: I0130 21:49:23.365611 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt" event={"ID":"e7c790ef-ae52-4809-b6f2-088811793867","Type":"ContainerDied","Data":"81d5aaa937f32ae2eeb2cddcf651a58af14e88752d8f0a54dd9218a6d3b3b6d3"} Jan 30 21:49:23 crc kubenswrapper[4869]: I0130 21:49:23.366020 4869 scope.go:117] "RemoveContainer" containerID="d2c17cd7962638c1d69167c3957921e692a78b0a67de0c31a0afa22696cd0a23" Jan 30 21:49:23 crc kubenswrapper[4869]: I0130 21:49:23.365667 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fgmnt" Jan 30 21:49:23 crc kubenswrapper[4869]: I0130 21:49:23.397914 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fgmnt"] Jan 30 21:49:23 crc kubenswrapper[4869]: I0130 21:49:23.402032 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fgmnt"] Jan 30 21:49:23 crc kubenswrapper[4869]: I0130 21:49:23.883688 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7c790ef-ae52-4809-b6f2-088811793867" path="/var/lib/kubelet/pods/e7c790ef-ae52-4809-b6f2-088811793867/volumes" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.650163 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7845fc8b9c-f25q6"] Jan 30 21:49:24 crc kubenswrapper[4869]: E0130 21:49:24.650374 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7c790ef-ae52-4809-b6f2-088811793867" containerName="oauth-openshift" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.650385 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7c790ef-ae52-4809-b6f2-088811793867" containerName="oauth-openshift" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.650486 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7c790ef-ae52-4809-b6f2-088811793867" containerName="oauth-openshift" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.650834 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7845fc8b9c-f25q6" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.654055 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.654369 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.654497 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.654602 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.655049 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.655401 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.656647 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.656873 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.657322 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.657342 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.657819 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.657943 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.661560 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7845fc8b9c-f25q6"] Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.665861 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.666759 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.670336 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.768384 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/02937dfe-4221-496f-bd15-34db64abf46f-v4-0-config-user-template-login\") pod \"oauth-openshift-7845fc8b9c-f25q6\" (UID: \"02937dfe-4221-496f-bd15-34db64abf46f\") " 
pod="openshift-authentication/oauth-openshift-7845fc8b9c-f25q6" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.768446 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/02937dfe-4221-496f-bd15-34db64abf46f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7845fc8b9c-f25q6\" (UID: \"02937dfe-4221-496f-bd15-34db64abf46f\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-f25q6" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.768478 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/02937dfe-4221-496f-bd15-34db64abf46f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7845fc8b9c-f25q6\" (UID: \"02937dfe-4221-496f-bd15-34db64abf46f\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-f25q6" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.768534 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/02937dfe-4221-496f-bd15-34db64abf46f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7845fc8b9c-f25q6\" (UID: \"02937dfe-4221-496f-bd15-34db64abf46f\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-f25q6" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.768589 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/02937dfe-4221-496f-bd15-34db64abf46f-v4-0-config-system-router-certs\") pod \"oauth-openshift-7845fc8b9c-f25q6\" (UID: \"02937dfe-4221-496f-bd15-34db64abf46f\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-f25q6" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.768618 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/02937dfe-4221-496f-bd15-34db64abf46f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7845fc8b9c-f25q6\" (UID: \"02937dfe-4221-496f-bd15-34db64abf46f\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-f25q6" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.768642 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/02937dfe-4221-496f-bd15-34db64abf46f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7845fc8b9c-f25q6\" (UID: \"02937dfe-4221-496f-bd15-34db64abf46f\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-f25q6" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.768674 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/02937dfe-4221-496f-bd15-34db64abf46f-v4-0-config-user-template-error\") pod \"oauth-openshift-7845fc8b9c-f25q6\" (UID: \"02937dfe-4221-496f-bd15-34db64abf46f\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-f25q6" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.768692 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" 
(UniqueName: \"kubernetes.io/secret/02937dfe-4221-496f-bd15-34db64abf46f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7845fc8b9c-f25q6\" (UID: \"02937dfe-4221-496f-bd15-34db64abf46f\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-f25q6" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.768712 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/02937dfe-4221-496f-bd15-34db64abf46f-audit-dir\") pod \"oauth-openshift-7845fc8b9c-f25q6\" (UID: \"02937dfe-4221-496f-bd15-34db64abf46f\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-f25q6" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.768769 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/02937dfe-4221-496f-bd15-34db64abf46f-v4-0-config-system-session\") pod \"oauth-openshift-7845fc8b9c-f25q6\" (UID: \"02937dfe-4221-496f-bd15-34db64abf46f\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-f25q6" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.768791 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtwkg\" (UniqueName: \"kubernetes.io/projected/02937dfe-4221-496f-bd15-34db64abf46f-kube-api-access-vtwkg\") pod \"oauth-openshift-7845fc8b9c-f25q6\" (UID: \"02937dfe-4221-496f-bd15-34db64abf46f\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-f25q6" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.768811 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/02937dfe-4221-496f-bd15-34db64abf46f-audit-policies\") pod \"oauth-openshift-7845fc8b9c-f25q6\" (UID: \"02937dfe-4221-496f-bd15-34db64abf46f\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-f25q6" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.768846 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/02937dfe-4221-496f-bd15-34db64abf46f-v4-0-config-system-service-ca\") pod \"oauth-openshift-7845fc8b9c-f25q6\" (UID: \"02937dfe-4221-496f-bd15-34db64abf46f\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-f25q6" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.869539 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/02937dfe-4221-496f-bd15-34db64abf46f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7845fc8b9c-f25q6\" (UID: \"02937dfe-4221-496f-bd15-34db64abf46f\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-f25q6" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.869592 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/02937dfe-4221-496f-bd15-34db64abf46f-v4-0-config-system-router-certs\") pod \"oauth-openshift-7845fc8b9c-f25q6\" (UID: \"02937dfe-4221-496f-bd15-34db64abf46f\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-f25q6" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.869613 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/02937dfe-4221-496f-bd15-34db64abf46f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7845fc8b9c-f25q6\" (UID: \"02937dfe-4221-496f-bd15-34db64abf46f\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-f25q6" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.869635 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/02937dfe-4221-496f-bd15-34db64abf46f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7845fc8b9c-f25q6\" (UID: \"02937dfe-4221-496f-bd15-34db64abf46f\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-f25q6" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.869680 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/02937dfe-4221-496f-bd15-34db64abf46f-v4-0-config-user-template-error\") pod \"oauth-openshift-7845fc8b9c-f25q6\" (UID: \"02937dfe-4221-496f-bd15-34db64abf46f\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-f25q6" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.869706 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/02937dfe-4221-496f-bd15-34db64abf46f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7845fc8b9c-f25q6\" (UID: \"02937dfe-4221-496f-bd15-34db64abf46f\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-f25q6" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.869731 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/02937dfe-4221-496f-bd15-34db64abf46f-audit-dir\") pod \"oauth-openshift-7845fc8b9c-f25q6\" (UID: \"02937dfe-4221-496f-bd15-34db64abf46f\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-f25q6" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.869759 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/02937dfe-4221-496f-bd15-34db64abf46f-v4-0-config-system-session\") pod \"oauth-openshift-7845fc8b9c-f25q6\" (UID: \"02937dfe-4221-496f-bd15-34db64abf46f\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-f25q6" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.869779 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtwkg\" (UniqueName: \"kubernetes.io/projected/02937dfe-4221-496f-bd15-34db64abf46f-kube-api-access-vtwkg\") pod \"oauth-openshift-7845fc8b9c-f25q6\" (UID: \"02937dfe-4221-496f-bd15-34db64abf46f\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-f25q6" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.869807 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/02937dfe-4221-496f-bd15-34db64abf46f-audit-policies\") pod \"oauth-openshift-7845fc8b9c-f25q6\" (UID: \"02937dfe-4221-496f-bd15-34db64abf46f\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-f25q6" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.869839 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/02937dfe-4221-496f-bd15-34db64abf46f-audit-dir\") pod 
\"oauth-openshift-7845fc8b9c-f25q6\" (UID: \"02937dfe-4221-496f-bd15-34db64abf46f\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-f25q6" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.869848 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/02937dfe-4221-496f-bd15-34db64abf46f-v4-0-config-system-service-ca\") pod \"oauth-openshift-7845fc8b9c-f25q6\" (UID: \"02937dfe-4221-496f-bd15-34db64abf46f\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-f25q6" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.869908 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/02937dfe-4221-496f-bd15-34db64abf46f-v4-0-config-user-template-login\") pod \"oauth-openshift-7845fc8b9c-f25q6\" (UID: \"02937dfe-4221-496f-bd15-34db64abf46f\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-f25q6" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.869935 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/02937dfe-4221-496f-bd15-34db64abf46f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7845fc8b9c-f25q6\" (UID: \"02937dfe-4221-496f-bd15-34db64abf46f\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-f25q6" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.869956 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/02937dfe-4221-496f-bd15-34db64abf46f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7845fc8b9c-f25q6\" (UID: \"02937dfe-4221-496f-bd15-34db64abf46f\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-f25q6" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.870453 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/02937dfe-4221-496f-bd15-34db64abf46f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7845fc8b9c-f25q6\" (UID: \"02937dfe-4221-496f-bd15-34db64abf46f\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-f25q6" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.870538 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/02937dfe-4221-496f-bd15-34db64abf46f-v4-0-config-system-service-ca\") pod \"oauth-openshift-7845fc8b9c-f25q6\" (UID: \"02937dfe-4221-496f-bd15-34db64abf46f\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-f25q6" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.871003 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/02937dfe-4221-496f-bd15-34db64abf46f-audit-policies\") pod \"oauth-openshift-7845fc8b9c-f25q6\" (UID: \"02937dfe-4221-496f-bd15-34db64abf46f\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-f25q6" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.872554 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/02937dfe-4221-496f-bd15-34db64abf46f-v4-0-config-system-trusted-ca-bundle\") pod 
\"oauth-openshift-7845fc8b9c-f25q6\" (UID: \"02937dfe-4221-496f-bd15-34db64abf46f\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-f25q6" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.874672 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/02937dfe-4221-496f-bd15-34db64abf46f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7845fc8b9c-f25q6\" (UID: \"02937dfe-4221-496f-bd15-34db64abf46f\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-f25q6" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.874714 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/02937dfe-4221-496f-bd15-34db64abf46f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7845fc8b9c-f25q6\" (UID: \"02937dfe-4221-496f-bd15-34db64abf46f\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-f25q6" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.874724 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/02937dfe-4221-496f-bd15-34db64abf46f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7845fc8b9c-f25q6\" (UID: \"02937dfe-4221-496f-bd15-34db64abf46f\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-f25q6" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.874781 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/02937dfe-4221-496f-bd15-34db64abf46f-v4-0-config-user-template-error\") pod \"oauth-openshift-7845fc8b9c-f25q6\" (UID: \"02937dfe-4221-496f-bd15-34db64abf46f\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-f25q6" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.875498 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/02937dfe-4221-496f-bd15-34db64abf46f-v4-0-config-system-router-certs\") pod \"oauth-openshift-7845fc8b9c-f25q6\" (UID: \"02937dfe-4221-496f-bd15-34db64abf46f\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-f25q6" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.876240 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/02937dfe-4221-496f-bd15-34db64abf46f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7845fc8b9c-f25q6\" (UID: \"02937dfe-4221-496f-bd15-34db64abf46f\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-f25q6" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.884087 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/02937dfe-4221-496f-bd15-34db64abf46f-v4-0-config-system-session\") pod \"oauth-openshift-7845fc8b9c-f25q6\" (UID: \"02937dfe-4221-496f-bd15-34db64abf46f\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-f25q6" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.884222 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/02937dfe-4221-496f-bd15-34db64abf46f-v4-0-config-user-template-login\") pod \"oauth-openshift-7845fc8b9c-f25q6\" 
(UID: \"02937dfe-4221-496f-bd15-34db64abf46f\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-f25q6" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.886846 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtwkg\" (UniqueName: \"kubernetes.io/projected/02937dfe-4221-496f-bd15-34db64abf46f-kube-api-access-vtwkg\") pod \"oauth-openshift-7845fc8b9c-f25q6\" (UID: \"02937dfe-4221-496f-bd15-34db64abf46f\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-f25q6" Jan 30 21:49:24 crc kubenswrapper[4869]: I0130 21:49:24.966621 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7845fc8b9c-f25q6" Jan 30 21:49:25 crc kubenswrapper[4869]: I0130 21:49:25.376403 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7845fc8b9c-f25q6"] Jan 30 21:49:26 crc kubenswrapper[4869]: I0130 21:49:26.383231 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7845fc8b9c-f25q6" event={"ID":"02937dfe-4221-496f-bd15-34db64abf46f","Type":"ContainerStarted","Data":"2b5cd3dd708a5226894610d408869a6fee205123190898e37b30b9b04be6d4bf"} Jan 30 21:49:26 crc kubenswrapper[4869]: I0130 21:49:26.383604 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7845fc8b9c-f25q6" event={"ID":"02937dfe-4221-496f-bd15-34db64abf46f","Type":"ContainerStarted","Data":"b06361a9835b0f469f128a73d0de83c581ea5c8b00e30f403d75f70f16bd38d2"} Jan 30 21:49:26 crc kubenswrapper[4869]: I0130 21:49:26.383620 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7845fc8b9c-f25q6" Jan 30 21:49:26 crc kubenswrapper[4869]: I0130 21:49:26.388246 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7845fc8b9c-f25q6" Jan 30 21:49:26 crc kubenswrapper[4869]: I0130 21:49:26.401656 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7845fc8b9c-f25q6" podStartSLOduration=29.401637564 podStartE2EDuration="29.401637564s" podCreationTimestamp="2026-01-30 21:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:49:26.399113556 +0000 UTC m=+367.284871581" watchObservedRunningTime="2026-01-30 21:49:26.401637564 +0000 UTC m=+367.287395589" Jan 30 21:49:31 crc kubenswrapper[4869]: I0130 21:49:31.990953 4869 patch_prober.go:28] interesting pod/machine-config-daemon-vzgdv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:49:31 crc kubenswrapper[4869]: I0130 21:49:31.991531 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:49:37 crc kubenswrapper[4869]: I0130 21:49:37.673330 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67cc8d88b-rgfvl"] Jan 30 21:49:37 crc 
kubenswrapper[4869]: I0130 21:49:37.674141 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-67cc8d88b-rgfvl" podUID="f2687bbe-78dd-4ad2-8180-8bd4698db21e" containerName="route-controller-manager" containerID="cri-o://3712e17c33d7679a708868ecc991b35b6757c1cebd73a65dce72c4e6bde85f1d" gracePeriod=30 Jan 30 21:49:38 crc kubenswrapper[4869]: I0130 21:49:38.445941 4869 generic.go:334] "Generic (PLEG): container finished" podID="f2687bbe-78dd-4ad2-8180-8bd4698db21e" containerID="3712e17c33d7679a708868ecc991b35b6757c1cebd73a65dce72c4e6bde85f1d" exitCode=0 Jan 30 21:49:38 crc kubenswrapper[4869]: I0130 21:49:38.446001 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67cc8d88b-rgfvl" event={"ID":"f2687bbe-78dd-4ad2-8180-8bd4698db21e","Type":"ContainerDied","Data":"3712e17c33d7679a708868ecc991b35b6757c1cebd73a65dce72c4e6bde85f1d"} Jan 30 21:49:38 crc kubenswrapper[4869]: I0130 21:49:38.677133 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67cc8d88b-rgfvl" Jan 30 21:49:38 crc kubenswrapper[4869]: I0130 21:49:38.704767 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54599f87c8-gfvxf"] Jan 30 21:49:38 crc kubenswrapper[4869]: E0130 21:49:38.705554 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2687bbe-78dd-4ad2-8180-8bd4698db21e" containerName="route-controller-manager" Jan 30 21:49:38 crc kubenswrapper[4869]: I0130 21:49:38.705567 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2687bbe-78dd-4ad2-8180-8bd4698db21e" containerName="route-controller-manager" Jan 30 21:49:38 crc kubenswrapper[4869]: I0130 21:49:38.705660 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2687bbe-78dd-4ad2-8180-8bd4698db21e" containerName="route-controller-manager" Jan 30 21:49:38 crc kubenswrapper[4869]: I0130 21:49:38.706078 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54599f87c8-gfvxf" Jan 30 21:49:38 crc kubenswrapper[4869]: I0130 21:49:38.716359 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54599f87c8-gfvxf"] Jan 30 21:49:38 crc kubenswrapper[4869]: I0130 21:49:38.838285 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2687bbe-78dd-4ad2-8180-8bd4698db21e-serving-cert\") pod \"f2687bbe-78dd-4ad2-8180-8bd4698db21e\" (UID: \"f2687bbe-78dd-4ad2-8180-8bd4698db21e\") " Jan 30 21:49:38 crc kubenswrapper[4869]: I0130 21:49:38.838498 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tzj2\" (UniqueName: \"kubernetes.io/projected/f2687bbe-78dd-4ad2-8180-8bd4698db21e-kube-api-access-8tzj2\") pod \"f2687bbe-78dd-4ad2-8180-8bd4698db21e\" (UID: \"f2687bbe-78dd-4ad2-8180-8bd4698db21e\") " Jan 30 21:49:38 crc kubenswrapper[4869]: I0130 21:49:38.838593 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2687bbe-78dd-4ad2-8180-8bd4698db21e-client-ca\") pod \"f2687bbe-78dd-4ad2-8180-8bd4698db21e\" (UID: \"f2687bbe-78dd-4ad2-8180-8bd4698db21e\") " Jan 30 21:49:38 crc kubenswrapper[4869]: I0130 21:49:38.838639 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2687bbe-78dd-4ad2-8180-8bd4698db21e-config\") pod \"f2687bbe-78dd-4ad2-8180-8bd4698db21e\" (UID: \"f2687bbe-78dd-4ad2-8180-8bd4698db21e\") " Jan 30 21:49:38 crc kubenswrapper[4869]: I0130 21:49:38.838992 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xjnn\" (UniqueName: \"kubernetes.io/projected/c3fe7a84-87f9-48ff-9982-15d711e13fcc-kube-api-access-7xjnn\") pod \"route-controller-manager-54599f87c8-gfvxf\" (UID: \"c3fe7a84-87f9-48ff-9982-15d711e13fcc\") " pod="openshift-route-controller-manager/route-controller-manager-54599f87c8-gfvxf" Jan 30 21:49:38 crc kubenswrapper[4869]: I0130 21:49:38.839046 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3fe7a84-87f9-48ff-9982-15d711e13fcc-config\") pod \"route-controller-manager-54599f87c8-gfvxf\" (UID: \"c3fe7a84-87f9-48ff-9982-15d711e13fcc\") " pod="openshift-route-controller-manager/route-controller-manager-54599f87c8-gfvxf" Jan 30 21:49:38 crc kubenswrapper[4869]: I0130 21:49:38.839100 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3fe7a84-87f9-48ff-9982-15d711e13fcc-serving-cert\") pod \"route-controller-manager-54599f87c8-gfvxf\" (UID: \"c3fe7a84-87f9-48ff-9982-15d711e13fcc\") " pod="openshift-route-controller-manager/route-controller-manager-54599f87c8-gfvxf" Jan 30 21:49:38 crc kubenswrapper[4869]: I0130 21:49:38.839154 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c3fe7a84-87f9-48ff-9982-15d711e13fcc-client-ca\") pod \"route-controller-manager-54599f87c8-gfvxf\" (UID: \"c3fe7a84-87f9-48ff-9982-15d711e13fcc\") " pod="openshift-route-controller-manager/route-controller-manager-54599f87c8-gfvxf" Jan 30 21:49:38 crc 
kubenswrapper[4869]: I0130 21:49:38.840045 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2687bbe-78dd-4ad2-8180-8bd4698db21e-client-ca" (OuterVolumeSpecName: "client-ca") pod "f2687bbe-78dd-4ad2-8180-8bd4698db21e" (UID: "f2687bbe-78dd-4ad2-8180-8bd4698db21e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:49:38 crc kubenswrapper[4869]: I0130 21:49:38.840375 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2687bbe-78dd-4ad2-8180-8bd4698db21e-config" (OuterVolumeSpecName: "config") pod "f2687bbe-78dd-4ad2-8180-8bd4698db21e" (UID: "f2687bbe-78dd-4ad2-8180-8bd4698db21e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:49:38 crc kubenswrapper[4869]: I0130 21:49:38.847427 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2687bbe-78dd-4ad2-8180-8bd4698db21e-kube-api-access-8tzj2" (OuterVolumeSpecName: "kube-api-access-8tzj2") pod "f2687bbe-78dd-4ad2-8180-8bd4698db21e" (UID: "f2687bbe-78dd-4ad2-8180-8bd4698db21e"). InnerVolumeSpecName "kube-api-access-8tzj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:49:38 crc kubenswrapper[4869]: I0130 21:49:38.852352 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2687bbe-78dd-4ad2-8180-8bd4698db21e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f2687bbe-78dd-4ad2-8180-8bd4698db21e" (UID: "f2687bbe-78dd-4ad2-8180-8bd4698db21e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:49:38 crc kubenswrapper[4869]: I0130 21:49:38.940960 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c3fe7a84-87f9-48ff-9982-15d711e13fcc-client-ca\") pod \"route-controller-manager-54599f87c8-gfvxf\" (UID: \"c3fe7a84-87f9-48ff-9982-15d711e13fcc\") " pod="openshift-route-controller-manager/route-controller-manager-54599f87c8-gfvxf" Jan 30 21:49:38 crc kubenswrapper[4869]: I0130 21:49:38.941077 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xjnn\" (UniqueName: \"kubernetes.io/projected/c3fe7a84-87f9-48ff-9982-15d711e13fcc-kube-api-access-7xjnn\") pod \"route-controller-manager-54599f87c8-gfvxf\" (UID: \"c3fe7a84-87f9-48ff-9982-15d711e13fcc\") " pod="openshift-route-controller-manager/route-controller-manager-54599f87c8-gfvxf" Jan 30 21:49:38 crc kubenswrapper[4869]: I0130 21:49:38.941106 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3fe7a84-87f9-48ff-9982-15d711e13fcc-config\") pod \"route-controller-manager-54599f87c8-gfvxf\" (UID: \"c3fe7a84-87f9-48ff-9982-15d711e13fcc\") " pod="openshift-route-controller-manager/route-controller-manager-54599f87c8-gfvxf" Jan 30 21:49:38 crc kubenswrapper[4869]: I0130 21:49:38.941142 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3fe7a84-87f9-48ff-9982-15d711e13fcc-serving-cert\") pod \"route-controller-manager-54599f87c8-gfvxf\" (UID: \"c3fe7a84-87f9-48ff-9982-15d711e13fcc\") " pod="openshift-route-controller-manager/route-controller-manager-54599f87c8-gfvxf" Jan 30 21:49:38 crc kubenswrapper[4869]: I0130 21:49:38.941182 4869 reconciler_common.go:293] 
"Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2687bbe-78dd-4ad2-8180-8bd4698db21e-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:49:38 crc kubenswrapper[4869]: I0130 21:49:38.941193 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2687bbe-78dd-4ad2-8180-8bd4698db21e-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:49:38 crc kubenswrapper[4869]: I0130 21:49:38.941201 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2687bbe-78dd-4ad2-8180-8bd4698db21e-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:49:38 crc kubenswrapper[4869]: I0130 21:49:38.941210 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tzj2\" (UniqueName: \"kubernetes.io/projected/f2687bbe-78dd-4ad2-8180-8bd4698db21e-kube-api-access-8tzj2\") on node \"crc\" DevicePath \"\"" Jan 30 21:49:38 crc kubenswrapper[4869]: I0130 21:49:38.942834 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c3fe7a84-87f9-48ff-9982-15d711e13fcc-client-ca\") pod \"route-controller-manager-54599f87c8-gfvxf\" (UID: \"c3fe7a84-87f9-48ff-9982-15d711e13fcc\") " pod="openshift-route-controller-manager/route-controller-manager-54599f87c8-gfvxf" Jan 30 21:49:38 crc kubenswrapper[4869]: I0130 21:49:38.944526 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3fe7a84-87f9-48ff-9982-15d711e13fcc-config\") pod \"route-controller-manager-54599f87c8-gfvxf\" (UID: \"c3fe7a84-87f9-48ff-9982-15d711e13fcc\") " pod="openshift-route-controller-manager/route-controller-manager-54599f87c8-gfvxf" Jan 30 21:49:38 crc kubenswrapper[4869]: I0130 21:49:38.949273 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3fe7a84-87f9-48ff-9982-15d711e13fcc-serving-cert\") pod \"route-controller-manager-54599f87c8-gfvxf\" (UID: \"c3fe7a84-87f9-48ff-9982-15d711e13fcc\") " pod="openshift-route-controller-manager/route-controller-manager-54599f87c8-gfvxf" Jan 30 21:49:38 crc kubenswrapper[4869]: I0130 21:49:38.963011 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xjnn\" (UniqueName: \"kubernetes.io/projected/c3fe7a84-87f9-48ff-9982-15d711e13fcc-kube-api-access-7xjnn\") pod \"route-controller-manager-54599f87c8-gfvxf\" (UID: \"c3fe7a84-87f9-48ff-9982-15d711e13fcc\") " pod="openshift-route-controller-manager/route-controller-manager-54599f87c8-gfvxf" Jan 30 21:49:39 crc kubenswrapper[4869]: I0130 21:49:39.030233 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54599f87c8-gfvxf" Jan 30 21:49:39 crc kubenswrapper[4869]: I0130 21:49:39.453502 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67cc8d88b-rgfvl" event={"ID":"f2687bbe-78dd-4ad2-8180-8bd4698db21e","Type":"ContainerDied","Data":"a64e4ca69b5f43fc47d3f2880b06be2a7f87d1e610bed7e7601b3224d2741bc0"} Jan 30 21:49:39 crc kubenswrapper[4869]: I0130 21:49:39.453571 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67cc8d88b-rgfvl" Jan 30 21:49:39 crc kubenswrapper[4869]: I0130 21:49:39.453800 4869 scope.go:117] "RemoveContainer" containerID="3712e17c33d7679a708868ecc991b35b6757c1cebd73a65dce72c4e6bde85f1d" Jan 30 21:49:39 crc kubenswrapper[4869]: I0130 21:49:39.461709 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54599f87c8-gfvxf"] Jan 30 21:49:39 crc kubenswrapper[4869]: W0130 21:49:39.470302 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3fe7a84_87f9_48ff_9982_15d711e13fcc.slice/crio-e3a8693d0b7ae985cb83456935549ca2a29084ec94ec8e6947b2c2470ae6c3f5 WatchSource:0}: Error finding container e3a8693d0b7ae985cb83456935549ca2a29084ec94ec8e6947b2c2470ae6c3f5: Status 404 returned error can't find the container with id e3a8693d0b7ae985cb83456935549ca2a29084ec94ec8e6947b2c2470ae6c3f5 Jan 30 21:49:39 crc kubenswrapper[4869]: I0130 21:49:39.489573 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67cc8d88b-rgfvl"] Jan 30 21:49:39 crc kubenswrapper[4869]: I0130 21:49:39.493371 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67cc8d88b-rgfvl"] Jan 30 21:49:39 crc kubenswrapper[4869]: I0130 21:49:39.886618 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2687bbe-78dd-4ad2-8180-8bd4698db21e" path="/var/lib/kubelet/pods/f2687bbe-78dd-4ad2-8180-8bd4698db21e/volumes" Jan 30 21:49:40 crc kubenswrapper[4869]: I0130 21:49:40.461467 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54599f87c8-gfvxf" event={"ID":"c3fe7a84-87f9-48ff-9982-15d711e13fcc","Type":"ContainerStarted","Data":"47add3cf9d99479117015f3e1ca35bf1cc9202d60f7a663b03172e34c6c7f1d6"} Jan 30 21:49:40 crc kubenswrapper[4869]: I0130 21:49:40.461550 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54599f87c8-gfvxf" event={"ID":"c3fe7a84-87f9-48ff-9982-15d711e13fcc","Type":"ContainerStarted","Data":"e3a8693d0b7ae985cb83456935549ca2a29084ec94ec8e6947b2c2470ae6c3f5"} Jan 30 21:49:40 crc kubenswrapper[4869]: I0130 21:49:40.462239 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-54599f87c8-gfvxf" Jan 30 21:49:40 crc kubenswrapper[4869]: I0130 21:49:40.467004 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-54599f87c8-gfvxf" Jan 30 21:49:40 crc kubenswrapper[4869]: I0130 21:49:40.477876 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-54599f87c8-gfvxf" podStartSLOduration=3.47785589 podStartE2EDuration="3.47785589s" podCreationTimestamp="2026-01-30 21:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:49:40.47754578 +0000 UTC m=+381.363303845" watchObservedRunningTime="2026-01-30 21:49:40.47785589 +0000 UTC m=+381.363613925" Jan 30 21:49:54 crc kubenswrapper[4869]: I0130 21:49:54.372775 4869 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-image-registry/image-registry-66df7c8f76-vgcwp"] Jan 30 21:49:54 crc kubenswrapper[4869]: I0130 21:49:54.375466 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-vgcwp" Jan 30 21:49:54 crc kubenswrapper[4869]: I0130 21:49:54.401649 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-vgcwp"] Jan 30 21:49:54 crc kubenswrapper[4869]: I0130 21:49:54.575579 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/79a6f575-0c5e-4130-894f-a78ff06ddcce-registry-tls\") pod \"image-registry-66df7c8f76-vgcwp\" (UID: \"79a6f575-0c5e-4130-894f-a78ff06ddcce\") " pod="openshift-image-registry/image-registry-66df7c8f76-vgcwp" Jan 30 21:49:54 crc kubenswrapper[4869]: I0130 21:49:54.575688 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/79a6f575-0c5e-4130-894f-a78ff06ddcce-registry-certificates\") pod \"image-registry-66df7c8f76-vgcwp\" (UID: \"79a6f575-0c5e-4130-894f-a78ff06ddcce\") " pod="openshift-image-registry/image-registry-66df7c8f76-vgcwp" Jan 30 21:49:54 crc kubenswrapper[4869]: I0130 21:49:54.575730 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/79a6f575-0c5e-4130-894f-a78ff06ddcce-bound-sa-token\") pod \"image-registry-66df7c8f76-vgcwp\" (UID: \"79a6f575-0c5e-4130-894f-a78ff06ddcce\") " pod="openshift-image-registry/image-registry-66df7c8f76-vgcwp" Jan 30 21:49:54 crc kubenswrapper[4869]: I0130 21:49:54.575779 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-vgcwp\" (UID: \"79a6f575-0c5e-4130-894f-a78ff06ddcce\") " pod="openshift-image-registry/image-registry-66df7c8f76-vgcwp" Jan 30 21:49:54 crc kubenswrapper[4869]: I0130 21:49:54.575825 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79a6f575-0c5e-4130-894f-a78ff06ddcce-trusted-ca\") pod \"image-registry-66df7c8f76-vgcwp\" (UID: \"79a6f575-0c5e-4130-894f-a78ff06ddcce\") " pod="openshift-image-registry/image-registry-66df7c8f76-vgcwp" Jan 30 21:49:54 crc kubenswrapper[4869]: I0130 21:49:54.575983 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/79a6f575-0c5e-4130-894f-a78ff06ddcce-ca-trust-extracted\") pod \"image-registry-66df7c8f76-vgcwp\" (UID: \"79a6f575-0c5e-4130-894f-a78ff06ddcce\") " pod="openshift-image-registry/image-registry-66df7c8f76-vgcwp" Jan 30 21:49:54 crc kubenswrapper[4869]: I0130 21:49:54.576036 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/79a6f575-0c5e-4130-894f-a78ff06ddcce-installation-pull-secrets\") pod \"image-registry-66df7c8f76-vgcwp\" (UID: \"79a6f575-0c5e-4130-894f-a78ff06ddcce\") " pod="openshift-image-registry/image-registry-66df7c8f76-vgcwp" Jan 30 21:49:54 crc kubenswrapper[4869]: I0130 
21:49:54.576087 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rntdb\" (UniqueName: \"kubernetes.io/projected/79a6f575-0c5e-4130-894f-a78ff06ddcce-kube-api-access-rntdb\") pod \"image-registry-66df7c8f76-vgcwp\" (UID: \"79a6f575-0c5e-4130-894f-a78ff06ddcce\") " pod="openshift-image-registry/image-registry-66df7c8f76-vgcwp" Jan 30 21:49:54 crc kubenswrapper[4869]: I0130 21:49:54.607583 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-vgcwp\" (UID: \"79a6f575-0c5e-4130-894f-a78ff06ddcce\") " pod="openshift-image-registry/image-registry-66df7c8f76-vgcwp" Jan 30 21:49:54 crc kubenswrapper[4869]: I0130 21:49:54.677639 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/79a6f575-0c5e-4130-894f-a78ff06ddcce-ca-trust-extracted\") pod \"image-registry-66df7c8f76-vgcwp\" (UID: \"79a6f575-0c5e-4130-894f-a78ff06ddcce\") " pod="openshift-image-registry/image-registry-66df7c8f76-vgcwp" Jan 30 21:49:54 crc kubenswrapper[4869]: I0130 21:49:54.677693 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/79a6f575-0c5e-4130-894f-a78ff06ddcce-installation-pull-secrets\") pod \"image-registry-66df7c8f76-vgcwp\" (UID: \"79a6f575-0c5e-4130-894f-a78ff06ddcce\") " pod="openshift-image-registry/image-registry-66df7c8f76-vgcwp" Jan 30 21:49:54 crc kubenswrapper[4869]: I0130 21:49:54.677728 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rntdb\" (UniqueName: \"kubernetes.io/projected/79a6f575-0c5e-4130-894f-a78ff06ddcce-kube-api-access-rntdb\") pod \"image-registry-66df7c8f76-vgcwp\" (UID: \"79a6f575-0c5e-4130-894f-a78ff06ddcce\") " pod="openshift-image-registry/image-registry-66df7c8f76-vgcwp" Jan 30 21:49:54 crc kubenswrapper[4869]: I0130 21:49:54.677761 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/79a6f575-0c5e-4130-894f-a78ff06ddcce-registry-tls\") pod \"image-registry-66df7c8f76-vgcwp\" (UID: \"79a6f575-0c5e-4130-894f-a78ff06ddcce\") " pod="openshift-image-registry/image-registry-66df7c8f76-vgcwp" Jan 30 21:49:54 crc kubenswrapper[4869]: I0130 21:49:54.677782 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/79a6f575-0c5e-4130-894f-a78ff06ddcce-bound-sa-token\") pod \"image-registry-66df7c8f76-vgcwp\" (UID: \"79a6f575-0c5e-4130-894f-a78ff06ddcce\") " pod="openshift-image-registry/image-registry-66df7c8f76-vgcwp" Jan 30 21:49:54 crc kubenswrapper[4869]: I0130 21:49:54.677803 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/79a6f575-0c5e-4130-894f-a78ff06ddcce-registry-certificates\") pod \"image-registry-66df7c8f76-vgcwp\" (UID: \"79a6f575-0c5e-4130-894f-a78ff06ddcce\") " pod="openshift-image-registry/image-registry-66df7c8f76-vgcwp" Jan 30 21:49:54 crc kubenswrapper[4869]: I0130 21:49:54.677833 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/79a6f575-0c5e-4130-894f-a78ff06ddcce-trusted-ca\") pod \"image-registry-66df7c8f76-vgcwp\" (UID: \"79a6f575-0c5e-4130-894f-a78ff06ddcce\") " pod="openshift-image-registry/image-registry-66df7c8f76-vgcwp" Jan 30 21:49:54 crc kubenswrapper[4869]: I0130 21:49:54.678232 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/79a6f575-0c5e-4130-894f-a78ff06ddcce-ca-trust-extracted\") pod \"image-registry-66df7c8f76-vgcwp\" (UID: \"79a6f575-0c5e-4130-894f-a78ff06ddcce\") " pod="openshift-image-registry/image-registry-66df7c8f76-vgcwp" Jan 30 21:49:54 crc kubenswrapper[4869]: I0130 21:49:54.678980 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79a6f575-0c5e-4130-894f-a78ff06ddcce-trusted-ca\") pod \"image-registry-66df7c8f76-vgcwp\" (UID: \"79a6f575-0c5e-4130-894f-a78ff06ddcce\") " pod="openshift-image-registry/image-registry-66df7c8f76-vgcwp" Jan 30 21:49:54 crc kubenswrapper[4869]: I0130 21:49:54.679847 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/79a6f575-0c5e-4130-894f-a78ff06ddcce-registry-certificates\") pod \"image-registry-66df7c8f76-vgcwp\" (UID: \"79a6f575-0c5e-4130-894f-a78ff06ddcce\") " pod="openshift-image-registry/image-registry-66df7c8f76-vgcwp" Jan 30 21:49:54 crc kubenswrapper[4869]: I0130 21:49:54.685786 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/79a6f575-0c5e-4130-894f-a78ff06ddcce-registry-tls\") pod \"image-registry-66df7c8f76-vgcwp\" (UID: \"79a6f575-0c5e-4130-894f-a78ff06ddcce\") " pod="openshift-image-registry/image-registry-66df7c8f76-vgcwp" Jan 30 21:49:54 crc kubenswrapper[4869]: I0130 21:49:54.686377 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/79a6f575-0c5e-4130-894f-a78ff06ddcce-installation-pull-secrets\") pod \"image-registry-66df7c8f76-vgcwp\" (UID: \"79a6f575-0c5e-4130-894f-a78ff06ddcce\") " pod="openshift-image-registry/image-registry-66df7c8f76-vgcwp" Jan 30 21:49:54 crc kubenswrapper[4869]: I0130 21:49:54.699253 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/79a6f575-0c5e-4130-894f-a78ff06ddcce-bound-sa-token\") pod \"image-registry-66df7c8f76-vgcwp\" (UID: \"79a6f575-0c5e-4130-894f-a78ff06ddcce\") " pod="openshift-image-registry/image-registry-66df7c8f76-vgcwp" Jan 30 21:49:54 crc kubenswrapper[4869]: I0130 21:49:54.701206 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rntdb\" (UniqueName: \"kubernetes.io/projected/79a6f575-0c5e-4130-894f-a78ff06ddcce-kube-api-access-rntdb\") pod \"image-registry-66df7c8f76-vgcwp\" (UID: \"79a6f575-0c5e-4130-894f-a78ff06ddcce\") " pod="openshift-image-registry/image-registry-66df7c8f76-vgcwp" Jan 30 21:49:54 crc kubenswrapper[4869]: I0130 21:49:54.993104 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-vgcwp"
Jan 30 21:49:55 crc kubenswrapper[4869]: I0130 21:49:55.483088 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-vgcwp"]
Jan 30 21:49:55 crc kubenswrapper[4869]: W0130 21:49:55.493875 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79a6f575_0c5e_4130_894f_a78ff06ddcce.slice/crio-79a3147531b8ee8b4cd16ef6396c33d8761d10181e787db6a675ecb44b6b7cd6 WatchSource:0}: Error finding container 79a3147531b8ee8b4cd16ef6396c33d8761d10181e787db6a675ecb44b6b7cd6: Status 404 returned error can't find the container with id 79a3147531b8ee8b4cd16ef6396c33d8761d10181e787db6a675ecb44b6b7cd6
Jan 30 21:49:55 crc kubenswrapper[4869]: I0130 21:49:55.538189 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-vgcwp" event={"ID":"79a6f575-0c5e-4130-894f-a78ff06ddcce","Type":"ContainerStarted","Data":"79a3147531b8ee8b4cd16ef6396c33d8761d10181e787db6a675ecb44b6b7cd6"}
Jan 30 21:49:56 crc kubenswrapper[4869]: I0130 21:49:56.544820 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-vgcwp" event={"ID":"79a6f575-0c5e-4130-894f-a78ff06ddcce","Type":"ContainerStarted","Data":"529948493f330c50a5e3df1cd1dfee300303b6fd8e6eb0a68f699ff95ea867c3"}
Jan 30 21:49:56 crc kubenswrapper[4869]: I0130 21:49:56.545278 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-vgcwp"
Jan 30 21:49:56 crc kubenswrapper[4869]: I0130 21:49:56.567374 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-vgcwp" podStartSLOduration=2.567352491 podStartE2EDuration="2.567352491s" podCreationTimestamp="2026-01-30 21:49:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:49:56.562839298 +0000 UTC m=+397.448597343" watchObservedRunningTime="2026-01-30 21:49:56.567352491 +0000 UTC m=+397.453110516"
Jan 30 21:49:58 crc kubenswrapper[4869]: I0130 21:49:58.662579 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6v7j4"]
Jan 30 21:49:58 crc kubenswrapper[4869]: I0130 21:49:58.665202 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6v7j4" podUID="4ebfcb0e-58a5-4ab1-894f-1a6093921531" containerName="registry-server" containerID="cri-o://5d6897835c78122de172fcbf0cf20ab7d1366470b8a90d6f577767f51c23b338" gracePeriod=30
Jan 30 21:49:58 crc kubenswrapper[4869]: I0130 21:49:58.675957 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b2g6w"]
Jan 30 21:49:58 crc kubenswrapper[4869]: I0130 21:49:58.676275 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-b2g6w" podUID="a79f57ed-ffe5-4f65-acd5-0bcd42e47a02" containerName="registry-server" containerID="cri-o://bdfe12b8ada59ecac25ece6b4064951aa03f2c9eae06860f9d608933bbe12a22" gracePeriod=30
Jan 30 21:49:58 crc kubenswrapper[4869]: I0130 21:49:58.686271 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-b86ql"]
Jan 30 21:49:58 crc kubenswrapper[4869]: I0130 21:49:58.686524 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-b86ql" podUID="fca2992a-2cb5-4b86-9cfe-66d8dae76acb" containerName="marketplace-operator" containerID="cri-o://9766d8dfe52e4fbd53538cdf3bef77f8e1e3437d7f110649ae1e2c5861e301f1" gracePeriod=30
Jan 30 21:49:58 crc kubenswrapper[4869]: I0130 21:49:58.700446 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rr69r"]
Jan 30 21:49:58 crc kubenswrapper[4869]: I0130 21:49:58.700713 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rr69r" podUID="b394f841-ea61-41c7-9b4b-7ad185073b70" containerName="registry-server" containerID="cri-o://2f1656b85d3ed469536399ca659885e08fcf346ea8c31f57fca6519a697b1b5d" gracePeriod=30
Jan 30 21:49:58 crc kubenswrapper[4869]: I0130 21:49:58.712402 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-55fhb"]
Jan 30 21:49:58 crc kubenswrapper[4869]: I0130 21:49:58.713279 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-55fhb"
Jan 30 21:49:58 crc kubenswrapper[4869]: I0130 21:49:58.729965 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tsp88"]
Jan 30 21:49:58 crc kubenswrapper[4869]: I0130 21:49:58.730265 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tsp88" podUID="0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3" containerName="registry-server" containerID="cri-o://b40787daa337627e594426c009ef5978b05485802914ed1a3478a4acddd9e720" gracePeriod=30
Jan 30 21:49:58 crc kubenswrapper[4869]: I0130 21:49:58.739598 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-55fhb"]
Jan 30 21:49:58 crc kubenswrapper[4869]: I0130 21:49:58.833857 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh29m\" (UniqueName: \"kubernetes.io/projected/059ebbdc-d9b5-4a32-a167-30dfeae746ff-kube-api-access-vh29m\") pod \"marketplace-operator-79b997595-55fhb\" (UID: \"059ebbdc-d9b5-4a32-a167-30dfeae746ff\") " pod="openshift-marketplace/marketplace-operator-79b997595-55fhb"
Jan 30 21:49:58 crc kubenswrapper[4869]: I0130 21:49:58.834213 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/059ebbdc-d9b5-4a32-a167-30dfeae746ff-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-55fhb\" (UID: \"059ebbdc-d9b5-4a32-a167-30dfeae746ff\") " pod="openshift-marketplace/marketplace-operator-79b997595-55fhb"
Jan 30 21:49:58 crc kubenswrapper[4869]: I0130 21:49:58.834306 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/059ebbdc-d9b5-4a32-a167-30dfeae746ff-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-55fhb\" (UID: \"059ebbdc-d9b5-4a32-a167-30dfeae746ff\") " pod="openshift-marketplace/marketplace-operator-79b997595-55fhb"
Jan 30 21:49:58 crc kubenswrapper[4869]: I0130 21:49:58.935620 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh29m\" (UniqueName: \"kubernetes.io/projected/059ebbdc-d9b5-4a32-a167-30dfeae746ff-kube-api-access-vh29m\") pod \"marketplace-operator-79b997595-55fhb\" (UID: \"059ebbdc-d9b5-4a32-a167-30dfeae746ff\") " pod="openshift-marketplace/marketplace-operator-79b997595-55fhb"
Jan 30 21:49:58 crc kubenswrapper[4869]: I0130 21:49:58.935752 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/059ebbdc-d9b5-4a32-a167-30dfeae746ff-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-55fhb\" (UID: \"059ebbdc-d9b5-4a32-a167-30dfeae746ff\") " pod="openshift-marketplace/marketplace-operator-79b997595-55fhb"
Jan 30 21:49:58 crc kubenswrapper[4869]: I0130 21:49:58.935779 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/059ebbdc-d9b5-4a32-a167-30dfeae746ff-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-55fhb\" (UID: \"059ebbdc-d9b5-4a32-a167-30dfeae746ff\") " pod="openshift-marketplace/marketplace-operator-79b997595-55fhb"
Jan 30 21:49:58 crc kubenswrapper[4869]: I0130 21:49:58.936838 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/059ebbdc-d9b5-4a32-a167-30dfeae746ff-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-55fhb\" (UID: \"059ebbdc-d9b5-4a32-a167-30dfeae746ff\") " pod="openshift-marketplace/marketplace-operator-79b997595-55fhb"
Jan 30 21:49:58 crc kubenswrapper[4869]: I0130 21:49:58.941729 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/059ebbdc-d9b5-4a32-a167-30dfeae746ff-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-55fhb\" (UID: \"059ebbdc-d9b5-4a32-a167-30dfeae746ff\") " pod="openshift-marketplace/marketplace-operator-79b997595-55fhb"
Jan 30 21:49:58 crc kubenswrapper[4869]: I0130 21:49:58.950320 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh29m\" (UniqueName: \"kubernetes.io/projected/059ebbdc-d9b5-4a32-a167-30dfeae746ff-kube-api-access-vh29m\") pod \"marketplace-operator-79b997595-55fhb\" (UID: \"059ebbdc-d9b5-4a32-a167-30dfeae746ff\") " pod="openshift-marketplace/marketplace-operator-79b997595-55fhb"
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.032778 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-55fhb"
Jan 30 21:49:59 crc kubenswrapper[4869]: E0130 21:49:59.161448 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bdfe12b8ada59ecac25ece6b4064951aa03f2c9eae06860f9d608933bbe12a22 is running failed: container process not found" containerID="bdfe12b8ada59ecac25ece6b4064951aa03f2c9eae06860f9d608933bbe12a22" cmd=["grpc_health_probe","-addr=:50051"]
Jan 30 21:49:59 crc kubenswrapper[4869]: E0130 21:49:59.161877 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bdfe12b8ada59ecac25ece6b4064951aa03f2c9eae06860f9d608933bbe12a22 is running failed: container process not found" containerID="bdfe12b8ada59ecac25ece6b4064951aa03f2c9eae06860f9d608933bbe12a22" cmd=["grpc_health_probe","-addr=:50051"]
Jan 30 21:49:59 crc kubenswrapper[4869]: E0130 21:49:59.162232 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bdfe12b8ada59ecac25ece6b4064951aa03f2c9eae06860f9d608933bbe12a22 is running failed: container process not found" containerID="bdfe12b8ada59ecac25ece6b4064951aa03f2c9eae06860f9d608933bbe12a22" cmd=["grpc_health_probe","-addr=:50051"]
Jan 30 21:49:59 crc kubenswrapper[4869]: E0130 21:49:59.162280 4869 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bdfe12b8ada59ecac25ece6b4064951aa03f2c9eae06860f9d608933bbe12a22 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-b2g6w" podUID="a79f57ed-ffe5-4f65-acd5-0bcd42e47a02" containerName="registry-server"
Jan 30 21:49:59 crc kubenswrapper[4869]: E0130 21:49:59.349138 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5d6897835c78122de172fcbf0cf20ab7d1366470b8a90d6f577767f51c23b338 is running failed: container process not found" containerID="5d6897835c78122de172fcbf0cf20ab7d1366470b8a90d6f577767f51c23b338" cmd=["grpc_health_probe","-addr=:50051"]
Jan 30 21:49:59 crc kubenswrapper[4869]: E0130 21:49:59.350432 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5d6897835c78122de172fcbf0cf20ab7d1366470b8a90d6f577767f51c23b338 is running failed: container process not found" containerID="5d6897835c78122de172fcbf0cf20ab7d1366470b8a90d6f577767f51c23b338" cmd=["grpc_health_probe","-addr=:50051"]
Jan 30 21:49:59 crc kubenswrapper[4869]: E0130 21:49:59.350743 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5d6897835c78122de172fcbf0cf20ab7d1366470b8a90d6f577767f51c23b338 is running failed: container process not found" containerID="5d6897835c78122de172fcbf0cf20ab7d1366470b8a90d6f577767f51c23b338" cmd=["grpc_health_probe","-addr=:50051"]
Jan 30 21:49:59 crc kubenswrapper[4869]: E0130 21:49:59.350822 4869 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5d6897835c78122de172fcbf0cf20ab7d1366470b8a90d6f577767f51c23b338 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-6v7j4" podUID="4ebfcb0e-58a5-4ab1-894f-1a6093921531" containerName="registry-server"
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.480416 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-55fhb"]
Jan 30 21:49:59 crc kubenswrapper[4869]: W0130 21:49:59.489343 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod059ebbdc_d9b5_4a32_a167_30dfeae746ff.slice/crio-ad1bd0fcded325af55af3bcb611af2b7f247d395968661bfef81ae2439f56ed7 WatchSource:0}: Error finding container ad1bd0fcded325af55af3bcb611af2b7f247d395968661bfef81ae2439f56ed7: Status 404 returned error can't find the container with id ad1bd0fcded325af55af3bcb611af2b7f247d395968661bfef81ae2439f56ed7
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.573618 4869 generic.go:334] "Generic (PLEG): container finished" podID="b394f841-ea61-41c7-9b4b-7ad185073b70" containerID="2f1656b85d3ed469536399ca659885e08fcf346ea8c31f57fca6519a697b1b5d" exitCode=0
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.573706 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rr69r" event={"ID":"b394f841-ea61-41c7-9b4b-7ad185073b70","Type":"ContainerDied","Data":"2f1656b85d3ed469536399ca659885e08fcf346ea8c31f57fca6519a697b1b5d"}
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.577825 4869 generic.go:334] "Generic (PLEG): container finished" podID="4ebfcb0e-58a5-4ab1-894f-1a6093921531" containerID="5d6897835c78122de172fcbf0cf20ab7d1366470b8a90d6f577767f51c23b338" exitCode=0
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.577942 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6v7j4" event={"ID":"4ebfcb0e-58a5-4ab1-894f-1a6093921531","Type":"ContainerDied","Data":"5d6897835c78122de172fcbf0cf20ab7d1366470b8a90d6f577767f51c23b338"}
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.581333 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-55fhb" event={"ID":"059ebbdc-d9b5-4a32-a167-30dfeae746ff","Type":"ContainerStarted","Data":"ad1bd0fcded325af55af3bcb611af2b7f247d395968661bfef81ae2439f56ed7"}
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.583302 4869 generic.go:334] "Generic (PLEG): container finished" podID="a79f57ed-ffe5-4f65-acd5-0bcd42e47a02" containerID="bdfe12b8ada59ecac25ece6b4064951aa03f2c9eae06860f9d608933bbe12a22" exitCode=0
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.583348 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2g6w" event={"ID":"a79f57ed-ffe5-4f65-acd5-0bcd42e47a02","Type":"ContainerDied","Data":"bdfe12b8ada59ecac25ece6b4064951aa03f2c9eae06860f9d608933bbe12a22"}
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.587167 4869 generic.go:334] "Generic (PLEG): container finished" podID="0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3" containerID="b40787daa337627e594426c009ef5978b05485802914ed1a3478a4acddd9e720" exitCode=0
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.587275 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tsp88" event={"ID":"0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3","Type":"ContainerDied","Data":"b40787daa337627e594426c009ef5978b05485802914ed1a3478a4acddd9e720"}
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.588944 4869 generic.go:334] "Generic (PLEG): container finished" podID="fca2992a-2cb5-4b86-9cfe-66d8dae76acb" containerID="9766d8dfe52e4fbd53538cdf3bef77f8e1e3437d7f110649ae1e2c5861e301f1" exitCode=0
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.588977 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-b86ql" event={"ID":"fca2992a-2cb5-4b86-9cfe-66d8dae76acb","Type":"ContainerDied","Data":"9766d8dfe52e4fbd53538cdf3bef77f8e1e3437d7f110649ae1e2c5861e301f1"}
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.590022 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6v7j4"
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.710252 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b2g6w"
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.724019 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rr69r"
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.738818 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-b86ql"
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.749142 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ebfcb0e-58a5-4ab1-894f-1a6093921531-utilities\") pod \"4ebfcb0e-58a5-4ab1-894f-1a6093921531\" (UID: \"4ebfcb0e-58a5-4ab1-894f-1a6093921531\") "
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.749235 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2q9f\" (UniqueName: \"kubernetes.io/projected/4ebfcb0e-58a5-4ab1-894f-1a6093921531-kube-api-access-c2q9f\") pod \"4ebfcb0e-58a5-4ab1-894f-1a6093921531\" (UID: \"4ebfcb0e-58a5-4ab1-894f-1a6093921531\") "
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.749448 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ebfcb0e-58a5-4ab1-894f-1a6093921531-catalog-content\") pod \"4ebfcb0e-58a5-4ab1-894f-1a6093921531\" (UID: \"4ebfcb0e-58a5-4ab1-894f-1a6093921531\") "
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.750364 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ebfcb0e-58a5-4ab1-894f-1a6093921531-utilities" (OuterVolumeSpecName: "utilities") pod "4ebfcb0e-58a5-4ab1-894f-1a6093921531" (UID: "4ebfcb0e-58a5-4ab1-894f-1a6093921531"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.754607 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ebfcb0e-58a5-4ab1-894f-1a6093921531-kube-api-access-c2q9f" (OuterVolumeSpecName: "kube-api-access-c2q9f") pod "4ebfcb0e-58a5-4ab1-894f-1a6093921531" (UID: "4ebfcb0e-58a5-4ab1-894f-1a6093921531"). InnerVolumeSpecName "kube-api-access-c2q9f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.777743 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tsp88"
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.805601 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ebfcb0e-58a5-4ab1-894f-1a6093921531-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ebfcb0e-58a5-4ab1-894f-1a6093921531" (UID: "4ebfcb0e-58a5-4ab1-894f-1a6093921531"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.852300 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b394f841-ea61-41c7-9b4b-7ad185073b70-catalog-content\") pod \"b394f841-ea61-41c7-9b4b-7ad185073b70\" (UID: \"b394f841-ea61-41c7-9b4b-7ad185073b70\") "
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.852707 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a79f57ed-ffe5-4f65-acd5-0bcd42e47a02-utilities\") pod \"a79f57ed-ffe5-4f65-acd5-0bcd42e47a02\" (UID: \"a79f57ed-ffe5-4f65-acd5-0bcd42e47a02\") "
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.852740 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7f2v\" (UniqueName: \"kubernetes.io/projected/fca2992a-2cb5-4b86-9cfe-66d8dae76acb-kube-api-access-p7f2v\") pod \"fca2992a-2cb5-4b86-9cfe-66d8dae76acb\" (UID: \"fca2992a-2cb5-4b86-9cfe-66d8dae76acb\") "
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.852774 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwcmb\" (UniqueName: \"kubernetes.io/projected/b394f841-ea61-41c7-9b4b-7ad185073b70-kube-api-access-lwcmb\") pod \"b394f841-ea61-41c7-9b4b-7ad185073b70\" (UID: \"b394f841-ea61-41c7-9b4b-7ad185073b70\") "
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.852816 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b394f841-ea61-41c7-9b4b-7ad185073b70-utilities\") pod \"b394f841-ea61-41c7-9b4b-7ad185073b70\" (UID: \"b394f841-ea61-41c7-9b4b-7ad185073b70\") "
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.852874 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a79f57ed-ffe5-4f65-acd5-0bcd42e47a02-catalog-content\") pod \"a79f57ed-ffe5-4f65-acd5-0bcd42e47a02\" (UID: \"a79f57ed-ffe5-4f65-acd5-0bcd42e47a02\") "
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.852940 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7m2t8\" (UniqueName: \"kubernetes.io/projected/a79f57ed-ffe5-4f65-acd5-0bcd42e47a02-kube-api-access-7m2t8\") pod \"a79f57ed-ffe5-4f65-acd5-0bcd42e47a02\" (UID: \"a79f57ed-ffe5-4f65-acd5-0bcd42e47a02\") "
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.853002 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fca2992a-2cb5-4b86-9cfe-66d8dae76acb-marketplace-trusted-ca\") pod \"fca2992a-2cb5-4b86-9cfe-66d8dae76acb\" (UID: \"fca2992a-2cb5-4b86-9cfe-66d8dae76acb\") "
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.853038 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fca2992a-2cb5-4b86-9cfe-66d8dae76acb-marketplace-operator-metrics\") pod \"fca2992a-2cb5-4b86-9cfe-66d8dae76acb\" (UID: \"fca2992a-2cb5-4b86-9cfe-66d8dae76acb\") "
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.853301 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ebfcb0e-58a5-4ab1-894f-1a6093921531-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.853318 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ebfcb0e-58a5-4ab1-894f-1a6093921531-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.853330 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2q9f\" (UniqueName: \"kubernetes.io/projected/4ebfcb0e-58a5-4ab1-894f-1a6093921531-kube-api-access-c2q9f\") on node \"crc\" DevicePath \"\""
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.853610 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a79f57ed-ffe5-4f65-acd5-0bcd42e47a02-utilities" (OuterVolumeSpecName: "utilities") pod "a79f57ed-ffe5-4f65-acd5-0bcd42e47a02" (UID: "a79f57ed-ffe5-4f65-acd5-0bcd42e47a02"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.853742 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b394f841-ea61-41c7-9b4b-7ad185073b70-utilities" (OuterVolumeSpecName: "utilities") pod "b394f841-ea61-41c7-9b4b-7ad185073b70" (UID: "b394f841-ea61-41c7-9b4b-7ad185073b70"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.854502 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fca2992a-2cb5-4b86-9cfe-66d8dae76acb-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "fca2992a-2cb5-4b86-9cfe-66d8dae76acb" (UID: "fca2992a-2cb5-4b86-9cfe-66d8dae76acb"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.859023 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fca2992a-2cb5-4b86-9cfe-66d8dae76acb-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "fca2992a-2cb5-4b86-9cfe-66d8dae76acb" (UID: "fca2992a-2cb5-4b86-9cfe-66d8dae76acb"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.882273 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fca2992a-2cb5-4b86-9cfe-66d8dae76acb-kube-api-access-p7f2v" (OuterVolumeSpecName: "kube-api-access-p7f2v") pod "fca2992a-2cb5-4b86-9cfe-66d8dae76acb" (UID: "fca2992a-2cb5-4b86-9cfe-66d8dae76acb"). InnerVolumeSpecName "kube-api-access-p7f2v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.882371 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b394f841-ea61-41c7-9b4b-7ad185073b70-kube-api-access-lwcmb" (OuterVolumeSpecName: "kube-api-access-lwcmb") pod "b394f841-ea61-41c7-9b4b-7ad185073b70" (UID: "b394f841-ea61-41c7-9b4b-7ad185073b70"). InnerVolumeSpecName "kube-api-access-lwcmb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.894445 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a79f57ed-ffe5-4f65-acd5-0bcd42e47a02-kube-api-access-7m2t8" (OuterVolumeSpecName: "kube-api-access-7m2t8") pod "a79f57ed-ffe5-4f65-acd5-0bcd42e47a02" (UID: "a79f57ed-ffe5-4f65-acd5-0bcd42e47a02"). InnerVolumeSpecName "kube-api-access-7m2t8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.952143 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b394f841-ea61-41c7-9b4b-7ad185073b70-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b394f841-ea61-41c7-9b4b-7ad185073b70" (UID: "b394f841-ea61-41c7-9b4b-7ad185073b70"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.953865 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3-catalog-content\") pod \"0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3\" (UID: \"0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3\") "
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.954005 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3-utilities\") pod \"0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3\" (UID: \"0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3\") "
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.954045 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xj9dd\" (UniqueName: \"kubernetes.io/projected/0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3-kube-api-access-xj9dd\") pod \"0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3\" (UID: \"0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3\") "
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.954543 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwcmb\" (UniqueName: \"kubernetes.io/projected/b394f841-ea61-41c7-9b4b-7ad185073b70-kube-api-access-lwcmb\") on node \"crc\" DevicePath \"\""
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.954562 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b394f841-ea61-41c7-9b4b-7ad185073b70-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.954573 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7m2t8\" (UniqueName: \"kubernetes.io/projected/a79f57ed-ffe5-4f65-acd5-0bcd42e47a02-kube-api-access-7m2t8\") on node \"crc\" DevicePath \"\""
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.954583 4869 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fca2992a-2cb5-4b86-9cfe-66d8dae76acb-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.954592 4869 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fca2992a-2cb5-4b86-9cfe-66d8dae76acb-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.954601 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b394f841-ea61-41c7-9b4b-7ad185073b70-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.954610 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a79f57ed-ffe5-4f65-acd5-0bcd42e47a02-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.954618 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7f2v\" (UniqueName: \"kubernetes.io/projected/fca2992a-2cb5-4b86-9cfe-66d8dae76acb-kube-api-access-p7f2v\") on node \"crc\" DevicePath \"\""
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.956103 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3-utilities" (OuterVolumeSpecName: "utilities") pod "0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3" (UID: "0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.976226 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a79f57ed-ffe5-4f65-acd5-0bcd42e47a02-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a79f57ed-ffe5-4f65-acd5-0bcd42e47a02" (UID: "a79f57ed-ffe5-4f65-acd5-0bcd42e47a02"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:49:59 crc kubenswrapper[4869]: I0130 21:49:59.984515 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3-kube-api-access-xj9dd" (OuterVolumeSpecName: "kube-api-access-xj9dd") pod "0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3" (UID: "0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3"). InnerVolumeSpecName "kube-api-access-xj9dd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:50:00 crc kubenswrapper[4869]: I0130 21:50:00.056034 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xj9dd\" (UniqueName: \"kubernetes.io/projected/0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3-kube-api-access-xj9dd\") on node \"crc\" DevicePath \"\""
Jan 30 21:50:00 crc kubenswrapper[4869]: I0130 21:50:00.056361 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a79f57ed-ffe5-4f65-acd5-0bcd42e47a02-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 21:50:00 crc kubenswrapper[4869]: I0130 21:50:00.056456 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 21:50:00 crc kubenswrapper[4869]: I0130 21:50:00.110736 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3" (UID: "0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:50:00 crc kubenswrapper[4869]: I0130 21:50:00.157887 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 21:50:00 crc kubenswrapper[4869]: I0130 21:50:00.595026 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-b86ql"
Jan 30 21:50:00 crc kubenswrapper[4869]: I0130 21:50:00.595027 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-b86ql" event={"ID":"fca2992a-2cb5-4b86-9cfe-66d8dae76acb","Type":"ContainerDied","Data":"888bf27858c12005f23724ca3347569a893d95cd06241988a5bd02bc3418537f"}
Jan 30 21:50:00 crc kubenswrapper[4869]: I0130 21:50:00.595501 4869 scope.go:117] "RemoveContainer" containerID="9766d8dfe52e4fbd53538cdf3bef77f8e1e3437d7f110649ae1e2c5861e301f1"
Jan 30 21:50:00 crc kubenswrapper[4869]: I0130 21:50:00.597170 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6v7j4" event={"ID":"4ebfcb0e-58a5-4ab1-894f-1a6093921531","Type":"ContainerDied","Data":"4d4e4c8b7475355f0d485adf4c9eea933864e2df61cdd8774f504cc426c3bb3a"}
Jan 30 21:50:00 crc kubenswrapper[4869]: I0130 21:50:00.597319 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6v7j4"
Jan 30 21:50:00 crc kubenswrapper[4869]: I0130 21:50:00.601545 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rr69r" event={"ID":"b394f841-ea61-41c7-9b4b-7ad185073b70","Type":"ContainerDied","Data":"04a4be37398446f96aa39027422ba11ed4e3b6d9aa04fc7c9944012408725b2b"}
Jan 30 21:50:00 crc kubenswrapper[4869]: I0130 21:50:00.602430 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rr69r"
Jan 30 21:50:00 crc kubenswrapper[4869]: I0130 21:50:00.608122 4869 scope.go:117] "RemoveContainer" containerID="5d6897835c78122de172fcbf0cf20ab7d1366470b8a90d6f577767f51c23b338"
Jan 30 21:50:00 crc kubenswrapper[4869]: I0130 21:50:00.608616 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-55fhb" event={"ID":"059ebbdc-d9b5-4a32-a167-30dfeae746ff","Type":"ContainerStarted","Data":"3a89b0cceaaace8f219eedcfd71cd46f20548d2368114a54ee638b006c907397"}
Jan 30 21:50:00 crc kubenswrapper[4869]: I0130 21:50:00.609022 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-55fhb"
Jan 30 21:50:00 crc kubenswrapper[4869]: I0130 21:50:00.613561 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b2g6w"
Jan 30 21:50:00 crc kubenswrapper[4869]: I0130 21:50:00.614110 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2g6w" event={"ID":"a79f57ed-ffe5-4f65-acd5-0bcd42e47a02","Type":"ContainerDied","Data":"971ded5e14869f2b606b95e3eca85789d95c5aad0ed0ad345f21e4536da82905"}
Jan 30 21:50:00 crc kubenswrapper[4869]: I0130 21:50:00.618098 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-55fhb"
Jan 30 21:50:00 crc kubenswrapper[4869]: I0130 21:50:00.619785 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tsp88" event={"ID":"0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3","Type":"ContainerDied","Data":"98700267ce47feed373823b3bdfe9409b839c62aaf80fce9d33af523817eabd2"}
Jan 30 21:50:00 crc kubenswrapper[4869]: I0130 21:50:00.619973 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tsp88"
Jan 30 21:50:00 crc kubenswrapper[4869]: I0130 21:50:00.626813 4869 scope.go:117] "RemoveContainer" containerID="69dc1ca55f3d35454428f0e4223daf0a6b3939cce5305132a6aa157e59d7b006"
Jan 30 21:50:00 crc kubenswrapper[4869]: I0130 21:50:00.627094 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-b86ql"]
Jan 30 21:50:00 crc kubenswrapper[4869]: I0130 21:50:00.630044 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-b86ql"]
Jan 30 21:50:00 crc kubenswrapper[4869]: I0130 21:50:00.641281 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6v7j4"]
Jan 30 21:50:00 crc kubenswrapper[4869]: I0130 21:50:00.647382 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6v7j4"]
Jan 30 21:50:00 crc kubenswrapper[4869]: I0130 21:50:00.652267 4869 scope.go:117] "RemoveContainer" containerID="8af38011d1e1fc94050eede5dcd98e753fe3a170e05b3363de5ffd21158228f3"
Jan 30 21:50:00 crc kubenswrapper[4869]: I0130 21:50:00.658749 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rr69r"]
Jan 30 21:50:00 crc kubenswrapper[4869]: I0130 21:50:00.665714 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rr69r"]
Jan 30 21:50:00 crc kubenswrapper[4869]: I0130 21:50:00.670485 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tsp88"]
Jan 30 21:50:00 crc kubenswrapper[4869]: I0130 21:50:00.674409 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tsp88"]
Jan 30 21:50:00 crc kubenswrapper[4869]: I0130 21:50:00.681778 4869 scope.go:117] "RemoveContainer" containerID="2f1656b85d3ed469536399ca659885e08fcf346ea8c31f57fca6519a697b1b5d"
Jan 30 21:50:00 crc kubenswrapper[4869]: I0130 21:50:00.693172 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-55fhb" podStartSLOduration=2.690574685 podStartE2EDuration="2.690574685s" podCreationTimestamp="2026-01-30 21:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:50:00.686513707 +0000 UTC m=+401.572271742" watchObservedRunningTime="2026-01-30 21:50:00.690574685 +0000 UTC m=+401.576332710"
Jan 30 21:50:00 crc kubenswrapper[4869]: I0130 21:50:00.714775 4869 scope.go:117] "RemoveContainer" containerID="73cd7cddb6afbd0a6ee910279ea8c2bb889488e711d0ac2d0e11e5803001d552"
Jan 30 21:50:00 crc kubenswrapper[4869]: I0130 21:50:00.718741 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b2g6w"]
Jan 30 21:50:00 crc kubenswrapper[4869]: I0130 21:50:00.722284 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-b2g6w"]
Jan 30 21:50:00 crc kubenswrapper[4869]: I0130 21:50:00.730922 4869 scope.go:117] "RemoveContainer" containerID="ce83adaaaac4cdc0b56959703785f8f7ae028e2bda9f5ae42e3667822932d150"
Jan 30 21:50:00 crc kubenswrapper[4869]: I0130 21:50:00.755369 4869 scope.go:117] "RemoveContainer" containerID="bdfe12b8ada59ecac25ece6b4064951aa03f2c9eae06860f9d608933bbe12a22"
Jan 30 21:50:00 crc kubenswrapper[4869]: I0130 21:50:00.768732 4869 scope.go:117] "RemoveContainer" containerID="7b7cc6d9ef18417e30d7694b50fd9d1c24239efd81fa3699f2bc8008c3495b4b"
Jan 30 21:50:00 crc kubenswrapper[4869]: I0130 21:50:00.785345 4869 scope.go:117] "RemoveContainer" containerID="3acc01a17eef1a4e05360ce28fd5d642e8521042e0052ea8fa9a9a5b5c342794"
Jan 30 21:50:00 crc kubenswrapper[4869]: I0130 21:50:00.815103 4869 scope.go:117] "RemoveContainer" containerID="b40787daa337627e594426c009ef5978b05485802914ed1a3478a4acddd9e720"
Jan 30 21:50:00 crc kubenswrapper[4869]: I0130 21:50:00.830125 4869 scope.go:117] "RemoveContainer" containerID="5e248ca8eda44eeaec38c679eb2ad84bb6907c68976fe46ee3408b724d83f75c"
Jan 30 21:50:00 crc kubenswrapper[4869]: I0130 21:50:00.845417 4869 scope.go:117] "RemoveContainer" containerID="421a3316b5543bc753352ad25fa04eba21113597087cd9d0f98ce1ea86b2c4de"
Jan 30 21:50:01 crc kubenswrapper[4869]: I0130 21:50:01.888995 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3" path="/var/lib/kubelet/pods/0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3/volumes"
Jan 30 21:50:01 crc kubenswrapper[4869]: I0130 21:50:01.889918 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ebfcb0e-58a5-4ab1-894f-1a6093921531" path="/var/lib/kubelet/pods/4ebfcb0e-58a5-4ab1-894f-1a6093921531/volumes"
Jan 30 21:50:01 crc kubenswrapper[4869]: I0130 21:50:01.890504 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a79f57ed-ffe5-4f65-acd5-0bcd42e47a02" path="/var/lib/kubelet/pods/a79f57ed-ffe5-4f65-acd5-0bcd42e47a02/volumes"
Jan 30 21:50:01 crc kubenswrapper[4869]: I0130 21:50:01.891453 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b394f841-ea61-41c7-9b4b-7ad185073b70" path="/var/lib/kubelet/pods/b394f841-ea61-41c7-9b4b-7ad185073b70/volumes"
Jan 30 21:50:01 crc kubenswrapper[4869]: I0130 21:50:01.892081 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fca2992a-2cb5-4b86-9cfe-66d8dae76acb" path="/var/lib/kubelet/pods/fca2992a-2cb5-4b86-9cfe-66d8dae76acb/volumes"
Jan 30 21:50:01 crc kubenswrapper[4869]: I0130 21:50:01.990140 4869 patch_prober.go:28] interesting pod/machine-config-daemon-vzgdv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 21:50:01 crc kubenswrapper[4869]: I0130 21:50:01.990192 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 21:50:02 crc kubenswrapper[4869]: I0130 21:50:02.883214 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-djhl2"]
Jan 30 21:50:02 crc kubenswrapper[4869]: E0130 21:50:02.883466 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a79f57ed-ffe5-4f65-acd5-0bcd42e47a02" containerName="extract-content"
Jan 30 21:50:02 crc kubenswrapper[4869]: I0130 21:50:02.883481 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a79f57ed-ffe5-4f65-acd5-0bcd42e47a02" containerName="extract-content"
Jan 30 21:50:02 crc kubenswrapper[4869]: E0130 21:50:02.883491 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b394f841-ea61-41c7-9b4b-7ad185073b70" containerName="extract-utilities"
Jan 30 21:50:02 crc kubenswrapper[4869]: I0130 21:50:02.883500 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b394f841-ea61-41c7-9b4b-7ad185073b70" containerName="extract-utilities"
Jan 30 21:50:02 crc kubenswrapper[4869]: E0130 21:50:02.883512 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a79f57ed-ffe5-4f65-acd5-0bcd42e47a02" containerName="extract-utilities"
Jan 30 21:50:02 crc kubenswrapper[4869]: I0130 21:50:02.883519 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a79f57ed-ffe5-4f65-acd5-0bcd42e47a02" containerName="extract-utilities"
Jan 30 21:50:02 crc kubenswrapper[4869]: E0130 21:50:02.883531 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ebfcb0e-58a5-4ab1-894f-1a6093921531" containerName="extract-content"
Jan 30 21:50:02 crc kubenswrapper[4869]: I0130 21:50:02.883538 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ebfcb0e-58a5-4ab1-894f-1a6093921531" containerName="extract-content"
Jan 30 21:50:02 crc kubenswrapper[4869]: E0130 21:50:02.883547 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b394f841-ea61-41c7-9b4b-7ad185073b70" containerName="extract-content"
Jan 30 21:50:02 crc kubenswrapper[4869]: I0130 21:50:02.883554 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b394f841-ea61-41c7-9b4b-7ad185073b70" containerName="extract-content"
Jan 30 21:50:02 crc kubenswrapper[4869]: E0130 21:50:02.883564 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b394f841-ea61-41c7-9b4b-7ad185073b70" containerName="registry-server"
Jan 30 21:50:02 crc kubenswrapper[4869]: I0130 21:50:02.883573 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b394f841-ea61-41c7-9b4b-7ad185073b70" containerName="registry-server"
Jan 30 21:50:02 crc kubenswrapper[4869]: E0130 21:50:02.883583 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3" containerName="extract-content"
Jan 30 21:50:02 crc kubenswrapper[4869]: I0130 21:50:02.883590 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3" containerName="extract-content"
Jan 30 21:50:02 crc kubenswrapper[4869]: E0130 21:50:02.883599 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fca2992a-2cb5-4b86-9cfe-66d8dae76acb" containerName="marketplace-operator"
Jan 30 21:50:02 crc kubenswrapper[4869]: I0130 21:50:02.883607 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="fca2992a-2cb5-4b86-9cfe-66d8dae76acb" containerName="marketplace-operator"
Jan 30 21:50:02 crc kubenswrapper[4869]: E0130 21:50:02.883617 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3" containerName="registry-server"
Jan 30 21:50:02 crc kubenswrapper[4869]: I0130 21:50:02.883625 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3" containerName="registry-server"
Jan 30 21:50:02 crc kubenswrapper[4869]: E0130 21:50:02.883636 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3" containerName="extract-utilities"
Jan 30 21:50:02 crc kubenswrapper[4869]: I0130 21:50:02.883645 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3" containerName="extract-utilities"
Jan 30 21:50:02 crc kubenswrapper[4869]: E0130 21:50:02.883655 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a79f57ed-ffe5-4f65-acd5-0bcd42e47a02" containerName="registry-server"
Jan 30 21:50:02 crc kubenswrapper[4869]: I0130 21:50:02.883663 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a79f57ed-ffe5-4f65-acd5-0bcd42e47a02" containerName="registry-server"
Jan 30 21:50:02 crc kubenswrapper[4869]: E0130 21:50:02.883679 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ebfcb0e-58a5-4ab1-894f-1a6093921531" containerName="registry-server"
Jan 30 21:50:02 crc kubenswrapper[4869]: I0130 21:50:02.883686 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ebfcb0e-58a5-4ab1-894f-1a6093921531" containerName="registry-server"
Jan 30 21:50:02 crc kubenswrapper[4869]: E0130 21:50:02.883697 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ebfcb0e-58a5-4ab1-894f-1a6093921531" containerName="extract-utilities"
Jan 30 21:50:02 crc kubenswrapper[4869]: I0130 21:50:02.883706 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ebfcb0e-58a5-4ab1-894f-1a6093921531" containerName="extract-utilities"
Jan 30 21:50:02 crc kubenswrapper[4869]: I0130 21:50:02.883854 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ebfcb0e-58a5-4ab1-894f-1a6093921531" containerName="registry-server"
Jan 30 21:50:02 crc kubenswrapper[4869]: I0130 21:50:02.883951 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="fca2992a-2cb5-4b86-9cfe-66d8dae76acb" containerName="marketplace-operator"
Jan 30 21:50:02 crc kubenswrapper[4869]: I0130 21:50:02.883965 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a79f57ed-ffe5-4f65-acd5-0bcd42e47a02" containerName="registry-server"
Jan 30 21:50:02 crc kubenswrapper[4869]: I0130 21:50:02.883975 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d2f572b-c0d3-4479-aaa0-a4e210d5e8e3" containerName="registry-server"
Jan 30 21:50:02 crc kubenswrapper[4869]: I0130 21:50:02.883984 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="b394f841-ea61-41c7-9b4b-7ad185073b70" containerName="registry-server"
Jan 30 21:50:02 crc kubenswrapper[4869]: I0130 21:50:02.884855 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-djhl2"
Jan 30 21:50:02 crc kubenswrapper[4869]: I0130 21:50:02.889657 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 30 21:50:02 crc kubenswrapper[4869]: I0130 21:50:02.894234 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-djhl2"]
Jan 30 21:50:02 crc kubenswrapper[4869]: I0130 21:50:02.992724 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f702f9e-18d7-4559-8745-8b691886d766-utilities\") pod \"certified-operators-djhl2\" (UID: \"9f702f9e-18d7-4559-8745-8b691886d766\") " pod="openshift-marketplace/certified-operators-djhl2"
Jan 30 21:50:02 crc kubenswrapper[4869]: I0130 21:50:02.992817 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvzc2\" (UniqueName: \"kubernetes.io/projected/9f702f9e-18d7-4559-8745-8b691886d766-kube-api-access-vvzc2\") pod \"certified-operators-djhl2\" (UID: \"9f702f9e-18d7-4559-8745-8b691886d766\") " pod="openshift-marketplace/certified-operators-djhl2"
Jan 30 21:50:02 crc kubenswrapper[4869]: I0130 21:50:02.992986 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f702f9e-18d7-4559-8745-8b691886d766-catalog-content\") pod \"certified-operators-djhl2\" (UID: \"9f702f9e-18d7-4559-8745-8b691886d766\") " pod="openshift-marketplace/certified-operators-djhl2"
Jan 30 21:50:03 crc kubenswrapper[4869]: I0130 21:50:03.078009 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wxw2l"]
Jan 30 21:50:03 crc kubenswrapper[4869]: I0130 21:50:03.079007 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wxw2l"
Jan 30 21:50:03 crc kubenswrapper[4869]: I0130 21:50:03.080874 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 30 21:50:03 crc kubenswrapper[4869]: I0130 21:50:03.088379 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wxw2l"]
Jan 30 21:50:03 crc kubenswrapper[4869]: I0130 21:50:03.094585 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f702f9e-18d7-4559-8745-8b691886d766-utilities\") pod \"certified-operators-djhl2\" (UID: \"9f702f9e-18d7-4559-8745-8b691886d766\") " pod="openshift-marketplace/certified-operators-djhl2"
Jan 30 21:50:03 crc kubenswrapper[4869]: I0130 21:50:03.094677 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvzc2\" (UniqueName: \"kubernetes.io/projected/9f702f9e-18d7-4559-8745-8b691886d766-kube-api-access-vvzc2\") pod \"certified-operators-djhl2\" (UID: \"9f702f9e-18d7-4559-8745-8b691886d766\") " pod="openshift-marketplace/certified-operators-djhl2"
Jan 30 21:50:03 crc kubenswrapper[4869]: I0130 21:50:03.094761 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f702f9e-18d7-4559-8745-8b691886d766-catalog-content\") pod \"certified-operators-djhl2\" (UID: \"9f702f9e-18d7-4559-8745-8b691886d766\") " pod="openshift-marketplace/certified-operators-djhl2"
Jan 30 21:50:03 crc kubenswrapper[4869]: I0130 21:50:03.095287 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f702f9e-18d7-4559-8745-8b691886d766-utilities\") pod \"certified-operators-djhl2\" (UID: \"9f702f9e-18d7-4559-8745-8b691886d766\") " pod="openshift-marketplace/certified-operators-djhl2"
Jan 30 21:50:03 crc kubenswrapper[4869]: I0130 21:50:03.095557 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f702f9e-18d7-4559-8745-8b691886d766-catalog-content\") pod \"certified-operators-djhl2\" (UID: \"9f702f9e-18d7-4559-8745-8b691886d766\") " pod="openshift-marketplace/certified-operators-djhl2"
Jan 30 21:50:03 crc kubenswrapper[4869]: I0130 21:50:03.126204 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvzc2\" (UniqueName: \"kubernetes.io/projected/9f702f9e-18d7-4559-8745-8b691886d766-kube-api-access-vvzc2\") pod \"certified-operators-djhl2\" (UID: \"9f702f9e-18d7-4559-8745-8b691886d766\") " pod="openshift-marketplace/certified-operators-djhl2"
Jan 30 21:50:03 crc kubenswrapper[4869]: I0130 21:50:03.196250 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2326e182-60d7-4af7-8845-8e688d90b0a1-catalog-content\") pod \"community-operators-wxw2l\" (UID: \"2326e182-60d7-4af7-8845-8e688d90b0a1\") " pod="openshift-marketplace/community-operators-wxw2l"
Jan 30 21:50:03 crc kubenswrapper[4869]: I0130 21:50:03.196321 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2326e182-60d7-4af7-8845-8e688d90b0a1-utilities\") pod \"community-operators-wxw2l\" (UID: \"2326e182-60d7-4af7-8845-8e688d90b0a1\") " pod="openshift-marketplace/community-operators-wxw2l"
Jan 30 21:50:03 crc kubenswrapper[4869]: I0130 21:50:03.196352 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf269\" (UniqueName: \"kubernetes.io/projected/2326e182-60d7-4af7-8845-8e688d90b0a1-kube-api-access-kf269\") pod \"community-operators-wxw2l\" (UID: \"2326e182-60d7-4af7-8845-8e688d90b0a1\") " pod="openshift-marketplace/community-operators-wxw2l"
Jan 30 21:50:03 crc kubenswrapper[4869]: I0130 21:50:03.202420 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-djhl2"
Jan 30 21:50:03 crc kubenswrapper[4869]: I0130 21:50:03.298136 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf269\" (UniqueName: \"kubernetes.io/projected/2326e182-60d7-4af7-8845-8e688d90b0a1-kube-api-access-kf269\") pod \"community-operators-wxw2l\" (UID: \"2326e182-60d7-4af7-8845-8e688d90b0a1\") " pod="openshift-marketplace/community-operators-wxw2l"
Jan 30 21:50:03 crc kubenswrapper[4869]: I0130 21:50:03.298261 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2326e182-60d7-4af7-8845-8e688d90b0a1-catalog-content\") pod \"community-operators-wxw2l\" (UID: \"2326e182-60d7-4af7-8845-8e688d90b0a1\") " pod="openshift-marketplace/community-operators-wxw2l"
Jan 30 21:50:03 crc kubenswrapper[4869]: I0130 21:50:03.298337 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2326e182-60d7-4af7-8845-8e688d90b0a1-utilities\") pod \"community-operators-wxw2l\" (UID: \"2326e182-60d7-4af7-8845-8e688d90b0a1\") " pod="openshift-marketplace/community-operators-wxw2l"
Jan 30 21:50:03 crc kubenswrapper[4869]: I0130 21:50:03.299040 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2326e182-60d7-4af7-8845-8e688d90b0a1-catalog-content\") pod \"community-operators-wxw2l\" (UID: \"2326e182-60d7-4af7-8845-8e688d90b0a1\") " pod="openshift-marketplace/community-operators-wxw2l"
Jan 30 21:50:03 crc kubenswrapper[4869]: I0130 21:50:03.299181 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2326e182-60d7-4af7-8845-8e688d90b0a1-utilities\") pod \"community-operators-wxw2l\" (UID: \"2326e182-60d7-4af7-8845-8e688d90b0a1\") " pod="openshift-marketplace/community-operators-wxw2l"
Jan 30 21:50:03 crc kubenswrapper[4869]: I0130 21:50:03.319964 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf269\" (UniqueName: \"kubernetes.io/projected/2326e182-60d7-4af7-8845-8e688d90b0a1-kube-api-access-kf269\") pod \"community-operators-wxw2l\" (UID: \"2326e182-60d7-4af7-8845-8e688d90b0a1\") " pod="openshift-marketplace/community-operators-wxw2l"
Jan 30 21:50:03 crc kubenswrapper[4869]: I0130 21:50:03.395443 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wxw2l"
Jan 30 21:50:03 crc kubenswrapper[4869]: I0130 21:50:03.610605 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-djhl2"]
Jan 30 21:50:03 crc kubenswrapper[4869]: I0130 21:50:03.641197 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-djhl2" event={"ID":"9f702f9e-18d7-4559-8745-8b691886d766","Type":"ContainerStarted","Data":"d98f90d92b34a5a853354e77aa07e2ec5c9334071d16d37cea83caf4c2377c80"}
Jan 30 21:50:03 crc kubenswrapper[4869]: I0130 21:50:03.798498 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wxw2l"]
Jan 30 21:50:04 crc kubenswrapper[4869]: I0130 21:50:04.647357 4869 generic.go:334] "Generic (PLEG): container finished" podID="9f702f9e-18d7-4559-8745-8b691886d766" containerID="69590e3685df93916f308566ec256aea76770a9e33d86dad187d6496343dd347" exitCode=0
Jan 30 21:50:04 crc kubenswrapper[4869]: I0130 21:50:04.647450 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-djhl2" event={"ID":"9f702f9e-18d7-4559-8745-8b691886d766","Type":"ContainerDied","Data":"69590e3685df93916f308566ec256aea76770a9e33d86dad187d6496343dd347"}
Jan 30 21:50:04 crc kubenswrapper[4869]: I0130 21:50:04.648862 4869 generic.go:334] "Generic (PLEG): container finished" podID="2326e182-60d7-4af7-8845-8e688d90b0a1" containerID="6fd395dc2756142ca79776f5fad7433d157ac43b0d0a349316d872d4636f1267" exitCode=0
Jan 30 21:50:04 crc kubenswrapper[4869]: I0130 21:50:04.648929 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxw2l" event={"ID":"2326e182-60d7-4af7-8845-8e688d90b0a1","Type":"ContainerDied","Data":"6fd395dc2756142ca79776f5fad7433d157ac43b0d0a349316d872d4636f1267"}
Jan 30 21:50:04 crc kubenswrapper[4869]: I0130 21:50:04.648956 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxw2l" event={"ID":"2326e182-60d7-4af7-8845-8e688d90b0a1","Type":"ContainerStarted","Data":"1701c359d96793150da1b2e6e8879de4c9ed7eb506b147453942d33f3da26a43"}
Jan 30 21:50:05 crc kubenswrapper[4869]: I0130 21:50:05.278863 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-h9l9x"]
Jan 30 21:50:05 crc kubenswrapper[4869]: I0130 21:50:05.282718 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h9l9x"
Jan 30 21:50:05 crc kubenswrapper[4869]: I0130 21:50:05.295163 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 30 21:50:05 crc kubenswrapper[4869]: I0130 21:50:05.299126 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h9l9x"]
Jan 30 21:50:05 crc kubenswrapper[4869]: I0130 21:50:05.425659 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnczv\" (UniqueName: \"kubernetes.io/projected/dd5e41fc-1989-4b7d-b1ca-195a9ce9b0ae-kube-api-access-lnczv\") pod \"redhat-marketplace-h9l9x\" (UID: \"dd5e41fc-1989-4b7d-b1ca-195a9ce9b0ae\") " pod="openshift-marketplace/redhat-marketplace-h9l9x"
Jan 30 21:50:05 crc kubenswrapper[4869]: I0130 21:50:05.425765 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd5e41fc-1989-4b7d-b1ca-195a9ce9b0ae-utilities\") pod \"redhat-marketplace-h9l9x\" (UID: \"dd5e41fc-1989-4b7d-b1ca-195a9ce9b0ae\") " pod="openshift-marketplace/redhat-marketplace-h9l9x"
Jan 30 21:50:05 crc kubenswrapper[4869]: I0130 21:50:05.425821 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd5e41fc-1989-4b7d-b1ca-195a9ce9b0ae-catalog-content\") pod \"redhat-marketplace-h9l9x\" (UID: \"dd5e41fc-1989-4b7d-b1ca-195a9ce9b0ae\") " pod="openshift-marketplace/redhat-marketplace-h9l9x"
Jan 30 21:50:05 crc kubenswrapper[4869]: I0130 21:50:05.479860 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9n2gh"]
Jan 30 21:50:05 crc kubenswrapper[4869]: I0130 21:50:05.481116 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9n2gh"
Jan 30 21:50:05 crc kubenswrapper[4869]: I0130 21:50:05.484550 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 30 21:50:05 crc kubenswrapper[4869]: I0130 21:50:05.491720 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9n2gh"]
Jan 30 21:50:05 crc kubenswrapper[4869]: I0130 21:50:05.526907 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd5e41fc-1989-4b7d-b1ca-195a9ce9b0ae-catalog-content\") pod \"redhat-marketplace-h9l9x\" (UID: \"dd5e41fc-1989-4b7d-b1ca-195a9ce9b0ae\") " pod="openshift-marketplace/redhat-marketplace-h9l9x"
Jan 30 21:50:05 crc kubenswrapper[4869]: I0130 21:50:05.526949 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnczv\" (UniqueName: \"kubernetes.io/projected/dd5e41fc-1989-4b7d-b1ca-195a9ce9b0ae-kube-api-access-lnczv\") pod \"redhat-marketplace-h9l9x\" (UID: \"dd5e41fc-1989-4b7d-b1ca-195a9ce9b0ae\") " pod="openshift-marketplace/redhat-marketplace-h9l9x"
Jan 30 21:50:05 crc kubenswrapper[4869]: I0130 21:50:05.526987 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd5e41fc-1989-4b7d-b1ca-195a9ce9b0ae-utilities\") pod \"redhat-marketplace-h9l9x\" (UID: \"dd5e41fc-1989-4b7d-b1ca-195a9ce9b0ae\") " pod="openshift-marketplace/redhat-marketplace-h9l9x"
Jan 30 21:50:05 crc kubenswrapper[4869]: I0130 21:50:05.527814 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd5e41fc-1989-4b7d-b1ca-195a9ce9b0ae-utilities\") pod \"redhat-marketplace-h9l9x\" (UID: \"dd5e41fc-1989-4b7d-b1ca-195a9ce9b0ae\") " pod="openshift-marketplace/redhat-marketplace-h9l9x"
Jan 30 21:50:05 crc kubenswrapper[4869]: I0130 21:50:05.527858 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd5e41fc-1989-4b7d-b1ca-195a9ce9b0ae-catalog-content\") pod \"redhat-marketplace-h9l9x\" (UID: \"dd5e41fc-1989-4b7d-b1ca-195a9ce9b0ae\") " pod="openshift-marketplace/redhat-marketplace-h9l9x"
Jan 30 21:50:05 crc kubenswrapper[4869]: I0130 21:50:05.548961 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnczv\" (UniqueName: \"kubernetes.io/projected/dd5e41fc-1989-4b7d-b1ca-195a9ce9b0ae-kube-api-access-lnczv\") pod \"redhat-marketplace-h9l9x\" (UID: \"dd5e41fc-1989-4b7d-b1ca-195a9ce9b0ae\") " pod="openshift-marketplace/redhat-marketplace-h9l9x"
Jan 30 21:50:05 crc kubenswrapper[4869]: I0130 21:50:05.609133 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h9l9x"
Jan 30 21:50:05 crc kubenswrapper[4869]: I0130 21:50:05.627700 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6abcba63-fc26-470d-b5bb-1a9e084cb65f-catalog-content\") pod \"redhat-operators-9n2gh\" (UID: \"6abcba63-fc26-470d-b5bb-1a9e084cb65f\") " pod="openshift-marketplace/redhat-operators-9n2gh"
Jan 30 21:50:05 crc kubenswrapper[4869]: I0130 21:50:05.627940 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6abcba63-fc26-470d-b5bb-1a9e084cb65f-utilities\") pod \"redhat-operators-9n2gh\" (UID: \"6abcba63-fc26-470d-b5bb-1a9e084cb65f\") " pod="openshift-marketplace/redhat-operators-9n2gh"
Jan 30 21:50:05 crc kubenswrapper[4869]: I0130 21:50:05.628001 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7k6d\" (UniqueName: \"kubernetes.io/projected/6abcba63-fc26-470d-b5bb-1a9e084cb65f-kube-api-access-h7k6d\") pod \"redhat-operators-9n2gh\" (UID: \"6abcba63-fc26-470d-b5bb-1a9e084cb65f\") " pod="openshift-marketplace/redhat-operators-9n2gh"
Jan 30 21:50:05 crc kubenswrapper[4869]: I0130 21:50:05.656253 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxw2l" event={"ID":"2326e182-60d7-4af7-8845-8e688d90b0a1","Type":"ContainerStarted","Data":"edc1d72673c2f77c25b474aa31b3dec67f5b41f3165e5e76cc892532d493bd48"}
Jan 30 21:50:06 crc kubenswrapper[4869]: I0130 21:50:05.729790 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6abcba63-fc26-470d-b5bb-1a9e084cb65f-catalog-content\") pod \"redhat-operators-9n2gh\" (UID: \"6abcba63-fc26-470d-b5bb-1a9e084cb65f\") " pod="openshift-marketplace/redhat-operators-9n2gh"
Jan 30 21:50:06 crc kubenswrapper[4869]: I0130 21:50:05.730063 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6abcba63-fc26-470d-b5bb-1a9e084cb65f-utilities\") pod \"redhat-operators-9n2gh\" (UID: \"6abcba63-fc26-470d-b5bb-1a9e084cb65f\") " pod="openshift-marketplace/redhat-operators-9n2gh"
Jan 30 21:50:06 crc kubenswrapper[4869]: I0130 21:50:05.730080 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7k6d\" (UniqueName: \"kubernetes.io/projected/6abcba63-fc26-470d-b5bb-1a9e084cb65f-kube-api-access-h7k6d\") pod \"redhat-operators-9n2gh\" (UID: \"6abcba63-fc26-470d-b5bb-1a9e084cb65f\") " pod="openshift-marketplace/redhat-operators-9n2gh"
Jan 30 21:50:06 crc kubenswrapper[4869]: I0130 21:50:05.730361 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6abcba63-fc26-470d-b5bb-1a9e084cb65f-catalog-content\") pod \"redhat-operators-9n2gh\" (UID: \"6abcba63-fc26-470d-b5bb-1a9e084cb65f\") " pod="openshift-marketplace/redhat-operators-9n2gh"
Jan 30 21:50:06 crc kubenswrapper[4869]: I0130 21:50:05.730414 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6abcba63-fc26-470d-b5bb-1a9e084cb65f-utilities\") pod \"redhat-operators-9n2gh\" (UID: \"6abcba63-fc26-470d-b5bb-1a9e084cb65f\") "
pod="openshift-marketplace/redhat-operators-9n2gh" Jan 30 21:50:06 crc kubenswrapper[4869]: I0130 21:50:05.748845 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7k6d\" (UniqueName: \"kubernetes.io/projected/6abcba63-fc26-470d-b5bb-1a9e084cb65f-kube-api-access-h7k6d\") pod \"redhat-operators-9n2gh\" (UID: \"6abcba63-fc26-470d-b5bb-1a9e084cb65f\") " pod="openshift-marketplace/redhat-operators-9n2gh" Jan 30 21:50:06 crc kubenswrapper[4869]: I0130 21:50:05.820604 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9n2gh" Jan 30 21:50:06 crc kubenswrapper[4869]: I0130 21:50:06.663561 4869 generic.go:334] "Generic (PLEG): container finished" podID="9f702f9e-18d7-4559-8745-8b691886d766" containerID="d63f0a8b1772166c12151715b94a5e0a5aacca9dc29d294af775b7bccf5ab35b" exitCode=0 Jan 30 21:50:06 crc kubenswrapper[4869]: I0130 21:50:06.663702 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-djhl2" event={"ID":"9f702f9e-18d7-4559-8745-8b691886d766","Type":"ContainerDied","Data":"d63f0a8b1772166c12151715b94a5e0a5aacca9dc29d294af775b7bccf5ab35b"} Jan 30 21:50:06 crc kubenswrapper[4869]: I0130 21:50:06.680701 4869 generic.go:334] "Generic (PLEG): container finished" podID="2326e182-60d7-4af7-8845-8e688d90b0a1" containerID="edc1d72673c2f77c25b474aa31b3dec67f5b41f3165e5e76cc892532d493bd48" exitCode=0 Jan 30 21:50:06 crc kubenswrapper[4869]: I0130 21:50:06.680764 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxw2l" event={"ID":"2326e182-60d7-4af7-8845-8e688d90b0a1","Type":"ContainerDied","Data":"edc1d72673c2f77c25b474aa31b3dec67f5b41f3165e5e76cc892532d493bd48"} Jan 30 21:50:06 crc kubenswrapper[4869]: I0130 21:50:06.734166 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9n2gh"] Jan 30 21:50:06 crc kubenswrapper[4869]: W0130 21:50:06.741149 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6abcba63_fc26_470d_b5bb_1a9e084cb65f.slice/crio-391b8dbc91ab0f25750f997b6bc17d8a0caf28fc055a32de13329dce8d14f8bb WatchSource:0}: Error finding container 391b8dbc91ab0f25750f997b6bc17d8a0caf28fc055a32de13329dce8d14f8bb: Status 404 returned error can't find the container with id 391b8dbc91ab0f25750f997b6bc17d8a0caf28fc055a32de13329dce8d14f8bb Jan 30 21:50:06 crc kubenswrapper[4869]: I0130 21:50:06.751954 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h9l9x"] Jan 30 21:50:06 crc kubenswrapper[4869]: W0130 21:50:06.761035 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd5e41fc_1989_4b7d_b1ca_195a9ce9b0ae.slice/crio-87731ad9f5f39f3c0241ebe322c656fbfc6fadebce608d1d0ca045d62e91a2b4 WatchSource:0}: Error finding container 87731ad9f5f39f3c0241ebe322c656fbfc6fadebce608d1d0ca045d62e91a2b4: Status 404 returned error can't find the container with id 87731ad9f5f39f3c0241ebe322c656fbfc6fadebce608d1d0ca045d62e91a2b4 Jan 30 21:50:07 crc kubenswrapper[4869]: I0130 21:50:07.686010 4869 generic.go:334] "Generic (PLEG): container finished" podID="6abcba63-fc26-470d-b5bb-1a9e084cb65f" containerID="9bcbec0650647a5078d7ad8df6b1e4477f0331d587eecdfab0affdd146e17030" exitCode=0 Jan 30 21:50:07 crc kubenswrapper[4869]: I0130 21:50:07.686176 4869 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9n2gh" event={"ID":"6abcba63-fc26-470d-b5bb-1a9e084cb65f","Type":"ContainerDied","Data":"9bcbec0650647a5078d7ad8df6b1e4477f0331d587eecdfab0affdd146e17030"} Jan 30 21:50:07 crc kubenswrapper[4869]: I0130 21:50:07.686562 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9n2gh" event={"ID":"6abcba63-fc26-470d-b5bb-1a9e084cb65f","Type":"ContainerStarted","Data":"391b8dbc91ab0f25750f997b6bc17d8a0caf28fc055a32de13329dce8d14f8bb"} Jan 30 21:50:07 crc kubenswrapper[4869]: I0130 21:50:07.689356 4869 generic.go:334] "Generic (PLEG): container finished" podID="dd5e41fc-1989-4b7d-b1ca-195a9ce9b0ae" containerID="7298e95549344713d237bd3c73fd830ea153b86fb2720c46a4e468e6d437f8e7" exitCode=0 Jan 30 21:50:07 crc kubenswrapper[4869]: I0130 21:50:07.689384 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h9l9x" event={"ID":"dd5e41fc-1989-4b7d-b1ca-195a9ce9b0ae","Type":"ContainerDied","Data":"7298e95549344713d237bd3c73fd830ea153b86fb2720c46a4e468e6d437f8e7"} Jan 30 21:50:07 crc kubenswrapper[4869]: I0130 21:50:07.689402 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h9l9x" event={"ID":"dd5e41fc-1989-4b7d-b1ca-195a9ce9b0ae","Type":"ContainerStarted","Data":"87731ad9f5f39f3c0241ebe322c656fbfc6fadebce608d1d0ca045d62e91a2b4"} Jan 30 21:50:08 crc kubenswrapper[4869]: I0130 21:50:08.696997 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-djhl2" event={"ID":"9f702f9e-18d7-4559-8745-8b691886d766","Type":"ContainerStarted","Data":"ae0504b8e99329e91a0d503493e580109fce595c68539f874437eb8bd19bbb09"} Jan 30 21:50:08 crc kubenswrapper[4869]: I0130 21:50:08.699145 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxw2l" event={"ID":"2326e182-60d7-4af7-8845-8e688d90b0a1","Type":"ContainerStarted","Data":"f896b4beef6b3c32f74c42d39e99e189af1b56baf5f5af2b4265040c533c54c5"} Jan 30 21:50:08 crc kubenswrapper[4869]: I0130 21:50:08.717108 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-djhl2" podStartSLOduration=3.845538463 podStartE2EDuration="6.717093172s" podCreationTimestamp="2026-01-30 21:50:02 +0000 UTC" firstStartedPulling="2026-01-30 21:50:04.649939771 +0000 UTC m=+405.535697796" lastFinishedPulling="2026-01-30 21:50:07.52149447 +0000 UTC m=+408.407252505" observedRunningTime="2026-01-30 21:50:08.714625645 +0000 UTC m=+409.600383670" watchObservedRunningTime="2026-01-30 21:50:08.717093172 +0000 UTC m=+409.602851197" Jan 30 21:50:08 crc kubenswrapper[4869]: I0130 21:50:08.739208 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wxw2l" podStartSLOduration=2.434909389 podStartE2EDuration="5.739190977s" podCreationTimestamp="2026-01-30 21:50:03 +0000 UTC" firstStartedPulling="2026-01-30 21:50:04.650265201 +0000 UTC m=+405.536023226" lastFinishedPulling="2026-01-30 21:50:07.954546789 +0000 UTC m=+408.840304814" observedRunningTime="2026-01-30 21:50:08.73480905 +0000 UTC m=+409.620567075" watchObservedRunningTime="2026-01-30 21:50:08.739190977 +0000 UTC m=+409.624949002" Jan 30 21:50:10 crc kubenswrapper[4869]: I0130 21:50:10.712697 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9n2gh" 
event={"ID":"6abcba63-fc26-470d-b5bb-1a9e084cb65f","Type":"ContainerStarted","Data":"b882e7792e6140c7492343c75b44df745c6cab7be3ece80ae184986bb37a3bbb"} Jan 30 21:50:10 crc kubenswrapper[4869]: I0130 21:50:10.716370 4869 generic.go:334] "Generic (PLEG): container finished" podID="dd5e41fc-1989-4b7d-b1ca-195a9ce9b0ae" containerID="a0fa5b2ea6308c30c3e338df68321d70f3c46ab75b6516a3d1295c5980cf0fce" exitCode=0 Jan 30 21:50:10 crc kubenswrapper[4869]: I0130 21:50:10.716473 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h9l9x" event={"ID":"dd5e41fc-1989-4b7d-b1ca-195a9ce9b0ae","Type":"ContainerDied","Data":"a0fa5b2ea6308c30c3e338df68321d70f3c46ab75b6516a3d1295c5980cf0fce"} Jan 30 21:50:11 crc kubenswrapper[4869]: I0130 21:50:11.733033 4869 generic.go:334] "Generic (PLEG): container finished" podID="6abcba63-fc26-470d-b5bb-1a9e084cb65f" containerID="b882e7792e6140c7492343c75b44df745c6cab7be3ece80ae184986bb37a3bbb" exitCode=0 Jan 30 21:50:11 crc kubenswrapper[4869]: I0130 21:50:11.733120 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9n2gh" event={"ID":"6abcba63-fc26-470d-b5bb-1a9e084cb65f","Type":"ContainerDied","Data":"b882e7792e6140c7492343c75b44df745c6cab7be3ece80ae184986bb37a3bbb"} Jan 30 21:50:11 crc kubenswrapper[4869]: I0130 21:50:11.737085 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h9l9x" event={"ID":"dd5e41fc-1989-4b7d-b1ca-195a9ce9b0ae","Type":"ContainerStarted","Data":"fdee46d42842e7eef17cf5eb087cbdf4737e7c81ab32170c677bdb58540d041d"} Jan 30 21:50:11 crc kubenswrapper[4869]: I0130 21:50:11.773716 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-h9l9x" podStartSLOduration=3.131668635 podStartE2EDuration="6.773695938s" podCreationTimestamp="2026-01-30 21:50:05 +0000 UTC" firstStartedPulling="2026-01-30 21:50:07.691030467 +0000 UTC m=+408.576788492" lastFinishedPulling="2026-01-30 21:50:11.33305777 +0000 UTC m=+412.218815795" observedRunningTime="2026-01-30 21:50:11.771027014 +0000 UTC m=+412.656785069" watchObservedRunningTime="2026-01-30 21:50:11.773695938 +0000 UTC m=+412.659453963" Jan 30 21:50:12 crc kubenswrapper[4869]: I0130 21:50:12.743969 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9n2gh" event={"ID":"6abcba63-fc26-470d-b5bb-1a9e084cb65f","Type":"ContainerStarted","Data":"1b7dddf0dee7a7ea21e004fc0a8e0a3b33d335f3af10fa2148ba1675e4ddf2f2"} Jan 30 21:50:12 crc kubenswrapper[4869]: I0130 21:50:12.764523 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9n2gh" podStartSLOduration=3.246573804 podStartE2EDuration="7.764506134s" podCreationTimestamp="2026-01-30 21:50:05 +0000 UTC" firstStartedPulling="2026-01-30 21:50:07.688102085 +0000 UTC m=+408.573860110" lastFinishedPulling="2026-01-30 21:50:12.206034415 +0000 UTC m=+413.091792440" observedRunningTime="2026-01-30 21:50:12.762471381 +0000 UTC m=+413.648229426" watchObservedRunningTime="2026-01-30 21:50:12.764506134 +0000 UTC m=+413.650264159" Jan 30 21:50:13 crc kubenswrapper[4869]: I0130 21:50:13.202987 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-djhl2" Jan 30 21:50:13 crc kubenswrapper[4869]: I0130 21:50:13.203937 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-djhl2" Jan 30 21:50:13 crc kubenswrapper[4869]: I0130 21:50:13.245519 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-djhl2" Jan 30 21:50:13 crc kubenswrapper[4869]: I0130 21:50:13.396444 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wxw2l" Jan 30 21:50:13 crc kubenswrapper[4869]: I0130 21:50:13.396640 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wxw2l" Jan 30 21:50:13 crc kubenswrapper[4869]: I0130 21:50:13.450518 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wxw2l" Jan 30 21:50:13 crc kubenswrapper[4869]: I0130 21:50:13.789961 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-djhl2" Jan 30 21:50:13 crc kubenswrapper[4869]: I0130 21:50:13.798945 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wxw2l" Jan 30 21:50:15 crc kubenswrapper[4869]: I0130 21:50:14.999985 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-vgcwp" Jan 30 21:50:15 crc kubenswrapper[4869]: I0130 21:50:15.089697 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-j58b4"] Jan 30 21:50:15 crc kubenswrapper[4869]: I0130 21:50:15.609792 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-h9l9x" Jan 30 21:50:15 crc kubenswrapper[4869]: I0130 21:50:15.610155 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-h9l9x" Jan 30 21:50:15 crc kubenswrapper[4869]: I0130 21:50:15.654782 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-h9l9x" Jan 30 21:50:15 crc kubenswrapper[4869]: I0130 21:50:15.821222 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9n2gh" Jan 30 21:50:15 crc kubenswrapper[4869]: I0130 21:50:15.821295 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9n2gh" Jan 30 21:50:16 crc kubenswrapper[4869]: I0130 21:50:16.855068 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9n2gh" podUID="6abcba63-fc26-470d-b5bb-1a9e084cb65f" containerName="registry-server" probeResult="failure" output=< Jan 30 21:50:16 crc kubenswrapper[4869]: timeout: failed to connect service ":50051" within 1s Jan 30 21:50:16 crc kubenswrapper[4869]: > Jan 30 21:50:25 crc kubenswrapper[4869]: I0130 21:50:25.647177 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-h9l9x" Jan 30 21:50:25 crc kubenswrapper[4869]: I0130 21:50:25.854464 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9n2gh" Jan 30 21:50:25 crc kubenswrapper[4869]: I0130 21:50:25.891296 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9n2gh" Jan 30 21:50:31 crc kubenswrapper[4869]: I0130 
21:50:31.991108 4869 patch_prober.go:28] interesting pod/machine-config-daemon-vzgdv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:50:31 crc kubenswrapper[4869]: I0130 21:50:31.991488 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:50:31 crc kubenswrapper[4869]: I0130 21:50:31.991544 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" Jan 30 21:50:31 crc kubenswrapper[4869]: I0130 21:50:31.992403 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"973a5ef833744fae8722d2d7d547e46f64e5f09ddd2aedbd9671f0d4496e56c1"} pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 21:50:31 crc kubenswrapper[4869]: I0130 21:50:31.992462 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500" containerName="machine-config-daemon" containerID="cri-o://973a5ef833744fae8722d2d7d547e46f64e5f09ddd2aedbd9671f0d4496e56c1" gracePeriod=600 Jan 30 21:50:32 crc kubenswrapper[4869]: I0130 21:50:32.850033 4869 generic.go:334] "Generic (PLEG): container finished" podID="b6fc0664-5e80-440d-a6e8-4189cdf5c500" containerID="973a5ef833744fae8722d2d7d547e46f64e5f09ddd2aedbd9671f0d4496e56c1" exitCode=0 Jan 30 21:50:32 crc kubenswrapper[4869]: I0130 21:50:32.850107 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" event={"ID":"b6fc0664-5e80-440d-a6e8-4189cdf5c500","Type":"ContainerDied","Data":"973a5ef833744fae8722d2d7d547e46f64e5f09ddd2aedbd9671f0d4496e56c1"} Jan 30 21:50:32 crc kubenswrapper[4869]: I0130 21:50:32.850383 4869 scope.go:117] "RemoveContainer" containerID="30fa528823e4fa19f4de380fa7c5ab78f40c763a6ea6624ba20e1a65a81980d2" Jan 30 21:50:33 crc kubenswrapper[4869]: I0130 21:50:33.859836 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" event={"ID":"b6fc0664-5e80-440d-a6e8-4189cdf5c500","Type":"ContainerStarted","Data":"496fc9807f9bce8a450851cd04a8d88568f861a83556f3656c812c3d38022119"} Jan 30 21:50:40 crc kubenswrapper[4869]: I0130 21:50:40.131180 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" podUID="cff2bdad-4c6e-44bc-977a-376e09638df1" containerName="registry" containerID="cri-o://56e409bc26eee10fe3f1fcc90fb807f96dc79c2051a6f26a0a32c0e85ebee8fc" gracePeriod=30 Jan 30 21:50:40 crc kubenswrapper[4869]: I0130 21:50:40.553548 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:50:40 crc kubenswrapper[4869]: I0130 21:50:40.576204 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cff2bdad-4c6e-44bc-977a-376e09638df1-installation-pull-secrets\") pod \"cff2bdad-4c6e-44bc-977a-376e09638df1\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " Jan 30 21:50:40 crc kubenswrapper[4869]: I0130 21:50:40.576264 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cff2bdad-4c6e-44bc-977a-376e09638df1-trusted-ca\") pod \"cff2bdad-4c6e-44bc-977a-376e09638df1\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " Jan 30 21:50:40 crc kubenswrapper[4869]: I0130 21:50:40.576497 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"cff2bdad-4c6e-44bc-977a-376e09638df1\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " Jan 30 21:50:40 crc kubenswrapper[4869]: I0130 21:50:40.576542 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cff2bdad-4c6e-44bc-977a-376e09638df1-registry-tls\") pod \"cff2bdad-4c6e-44bc-977a-376e09638df1\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " Jan 30 21:50:40 crc kubenswrapper[4869]: I0130 21:50:40.576560 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtlhv\" (UniqueName: \"kubernetes.io/projected/cff2bdad-4c6e-44bc-977a-376e09638df1-kube-api-access-wtlhv\") pod \"cff2bdad-4c6e-44bc-977a-376e09638df1\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " Jan 30 21:50:40 crc kubenswrapper[4869]: I0130 21:50:40.577014 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cff2bdad-4c6e-44bc-977a-376e09638df1-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "cff2bdad-4c6e-44bc-977a-376e09638df1" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:50:40 crc kubenswrapper[4869]: I0130 21:50:40.577176 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cff2bdad-4c6e-44bc-977a-376e09638df1-ca-trust-extracted\") pod \"cff2bdad-4c6e-44bc-977a-376e09638df1\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " Jan 30 21:50:40 crc kubenswrapper[4869]: I0130 21:50:40.577238 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cff2bdad-4c6e-44bc-977a-376e09638df1-registry-certificates\") pod \"cff2bdad-4c6e-44bc-977a-376e09638df1\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " Jan 30 21:50:40 crc kubenswrapper[4869]: I0130 21:50:40.577281 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cff2bdad-4c6e-44bc-977a-376e09638df1-bound-sa-token\") pod \"cff2bdad-4c6e-44bc-977a-376e09638df1\" (UID: \"cff2bdad-4c6e-44bc-977a-376e09638df1\") " Jan 30 21:50:40 crc kubenswrapper[4869]: I0130 21:50:40.577521 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cff2bdad-4c6e-44bc-977a-376e09638df1-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:50:40 crc kubenswrapper[4869]: I0130 21:50:40.578185 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cff2bdad-4c6e-44bc-977a-376e09638df1-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "cff2bdad-4c6e-44bc-977a-376e09638df1" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:50:40 crc kubenswrapper[4869]: I0130 21:50:40.588270 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cff2bdad-4c6e-44bc-977a-376e09638df1-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "cff2bdad-4c6e-44bc-977a-376e09638df1" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:50:40 crc kubenswrapper[4869]: I0130 21:50:40.590516 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cff2bdad-4c6e-44bc-977a-376e09638df1-kube-api-access-wtlhv" (OuterVolumeSpecName: "kube-api-access-wtlhv") pod "cff2bdad-4c6e-44bc-977a-376e09638df1" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1"). InnerVolumeSpecName "kube-api-access-wtlhv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:50:40 crc kubenswrapper[4869]: I0130 21:50:40.590586 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cff2bdad-4c6e-44bc-977a-376e09638df1-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "cff2bdad-4c6e-44bc-977a-376e09638df1" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:50:40 crc kubenswrapper[4869]: I0130 21:50:40.595266 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cff2bdad-4c6e-44bc-977a-376e09638df1-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "cff2bdad-4c6e-44bc-977a-376e09638df1" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:50:40 crc kubenswrapper[4869]: I0130 21:50:40.601054 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cff2bdad-4c6e-44bc-977a-376e09638df1-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "cff2bdad-4c6e-44bc-977a-376e09638df1" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:50:40 crc kubenswrapper[4869]: I0130 21:50:40.605371 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "cff2bdad-4c6e-44bc-977a-376e09638df1" (UID: "cff2bdad-4c6e-44bc-977a-376e09638df1"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 21:50:40 crc kubenswrapper[4869]: I0130 21:50:40.678860 4869 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cff2bdad-4c6e-44bc-977a-376e09638df1-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 30 21:50:40 crc kubenswrapper[4869]: I0130 21:50:40.678909 4869 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cff2bdad-4c6e-44bc-977a-376e09638df1-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:50:40 crc kubenswrapper[4869]: I0130 21:50:40.678920 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtlhv\" (UniqueName: \"kubernetes.io/projected/cff2bdad-4c6e-44bc-977a-376e09638df1-kube-api-access-wtlhv\") on node \"crc\" DevicePath \"\"" Jan 30 21:50:40 crc kubenswrapper[4869]: I0130 21:50:40.678928 4869 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cff2bdad-4c6e-44bc-977a-376e09638df1-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 30 21:50:40 crc kubenswrapper[4869]: I0130 21:50:40.678937 4869 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cff2bdad-4c6e-44bc-977a-376e09638df1-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 30 21:50:40 crc kubenswrapper[4869]: I0130 21:50:40.678947 4869 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cff2bdad-4c6e-44bc-977a-376e09638df1-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 21:50:40 crc kubenswrapper[4869]: I0130 21:50:40.895140 4869 generic.go:334] "Generic (PLEG): container finished" podID="cff2bdad-4c6e-44bc-977a-376e09638df1" containerID="56e409bc26eee10fe3f1fcc90fb807f96dc79c2051a6f26a0a32c0e85ebee8fc" exitCode=0 Jan 30 21:50:40 crc kubenswrapper[4869]: I0130 21:50:40.895193 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" 
event={"ID":"cff2bdad-4c6e-44bc-977a-376e09638df1","Type":"ContainerDied","Data":"56e409bc26eee10fe3f1fcc90fb807f96dc79c2051a6f26a0a32c0e85ebee8fc"} Jan 30 21:50:40 crc kubenswrapper[4869]: I0130 21:50:40.895224 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" event={"ID":"cff2bdad-4c6e-44bc-977a-376e09638df1","Type":"ContainerDied","Data":"963affe4e9bdb0930a031924482bddc80eeff748caf3f306e00f7599d557468a"} Jan 30 21:50:40 crc kubenswrapper[4869]: I0130 21:50:40.895244 4869 scope.go:117] "RemoveContainer" containerID="56e409bc26eee10fe3f1fcc90fb807f96dc79c2051a6f26a0a32c0e85ebee8fc" Jan 30 21:50:40 crc kubenswrapper[4869]: I0130 21:50:40.895358 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-j58b4" Jan 30 21:50:40 crc kubenswrapper[4869]: I0130 21:50:40.923667 4869 scope.go:117] "RemoveContainer" containerID="56e409bc26eee10fe3f1fcc90fb807f96dc79c2051a6f26a0a32c0e85ebee8fc" Jan 30 21:50:40 crc kubenswrapper[4869]: E0130 21:50:40.924104 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56e409bc26eee10fe3f1fcc90fb807f96dc79c2051a6f26a0a32c0e85ebee8fc\": container with ID starting with 56e409bc26eee10fe3f1fcc90fb807f96dc79c2051a6f26a0a32c0e85ebee8fc not found: ID does not exist" containerID="56e409bc26eee10fe3f1fcc90fb807f96dc79c2051a6f26a0a32c0e85ebee8fc" Jan 30 21:50:40 crc kubenswrapper[4869]: I0130 21:50:40.924161 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56e409bc26eee10fe3f1fcc90fb807f96dc79c2051a6f26a0a32c0e85ebee8fc"} err="failed to get container status \"56e409bc26eee10fe3f1fcc90fb807f96dc79c2051a6f26a0a32c0e85ebee8fc\": rpc error: code = NotFound desc = could not find container \"56e409bc26eee10fe3f1fcc90fb807f96dc79c2051a6f26a0a32c0e85ebee8fc\": container with ID starting with 56e409bc26eee10fe3f1fcc90fb807f96dc79c2051a6f26a0a32c0e85ebee8fc not found: ID does not exist" Jan 30 21:50:40 crc kubenswrapper[4869]: I0130 21:50:40.926120 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-j58b4"] Jan 30 21:50:40 crc kubenswrapper[4869]: I0130 21:50:40.933792 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-j58b4"] Jan 30 21:50:41 crc kubenswrapper[4869]: I0130 21:50:41.883622 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cff2bdad-4c6e-44bc-977a-376e09638df1" path="/var/lib/kubelet/pods/cff2bdad-4c6e-44bc-977a-376e09638df1/volumes" Jan 30 21:53:01 crc kubenswrapper[4869]: I0130 21:53:01.990220 4869 patch_prober.go:28] interesting pod/machine-config-daemon-vzgdv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:53:02 crc kubenswrapper[4869]: I0130 21:53:01.990764 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:53:31 crc kubenswrapper[4869]: I0130 21:53:31.990468 4869 patch_prober.go:28] 
interesting pod/machine-config-daemon-vzgdv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:53:31 crc kubenswrapper[4869]: I0130 21:53:31.991088 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:54:01 crc kubenswrapper[4869]: I0130 21:54:01.990664 4869 patch_prober.go:28] interesting pod/machine-config-daemon-vzgdv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:54:01 crc kubenswrapper[4869]: I0130 21:54:01.991273 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:54:01 crc kubenswrapper[4869]: I0130 21:54:01.991323 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" Jan 30 21:54:01 crc kubenswrapper[4869]: I0130 21:54:01.991872 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"496fc9807f9bce8a450851cd04a8d88568f861a83556f3656c812c3d38022119"} pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 21:54:01 crc kubenswrapper[4869]: I0130 21:54:01.991942 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500" containerName="machine-config-daemon" containerID="cri-o://496fc9807f9bce8a450851cd04a8d88568f861a83556f3656c812c3d38022119" gracePeriod=600 Jan 30 21:54:02 crc kubenswrapper[4869]: I0130 21:54:02.907389 4869 generic.go:334] "Generic (PLEG): container finished" podID="b6fc0664-5e80-440d-a6e8-4189cdf5c500" containerID="496fc9807f9bce8a450851cd04a8d88568f861a83556f3656c812c3d38022119" exitCode=0 Jan 30 21:54:02 crc kubenswrapper[4869]: I0130 21:54:02.907454 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" event={"ID":"b6fc0664-5e80-440d-a6e8-4189cdf5c500","Type":"ContainerDied","Data":"496fc9807f9bce8a450851cd04a8d88568f861a83556f3656c812c3d38022119"} Jan 30 21:54:02 crc kubenswrapper[4869]: I0130 21:54:02.907692 4869 scope.go:117] "RemoveContainer" containerID="973a5ef833744fae8722d2d7d547e46f64e5f09ddd2aedbd9671f0d4496e56c1" Jan 30 21:54:03 crc kubenswrapper[4869]: I0130 21:54:03.915027 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" event={"ID":"b6fc0664-5e80-440d-a6e8-4189cdf5c500","Type":"ContainerStarted","Data":"a7ff76a93ea10c54b4c308e0ca79595cb658f2169be1db3ae81fc5f671455e21"} Jan 30 
21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.087546 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-stqvf"] Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.088540 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerName="ovn-controller" containerID="cri-o://6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3" gracePeriod=30 Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.088955 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerName="sbdb" containerID="cri-o://e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e" gracePeriod=30 Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.089005 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerName="nbdb" containerID="cri-o://54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86" gracePeriod=30 Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.089056 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerName="ovn-acl-logging" containerID="cri-o://4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8" gracePeriod=30 Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.089061 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerName="kube-rbac-proxy-node" containerID="cri-o://2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5" gracePeriod=30 Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.089213 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerName="northd" containerID="cri-o://b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09" gracePeriod=30 Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.089281 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36" gracePeriod=30 Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.121726 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerName="ovnkube-controller" containerID="cri-o://9d117969d303cc766951654a0c992d0feda615c5f64221565c8b3a44dc2c114f" gracePeriod=30 Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.370453 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-stqvf_c39d4fe5-06cd-4ea4-8336-bd481332c475/ovnkube-controller/3.log" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.373117 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-stqvf_c39d4fe5-06cd-4ea4-8336-bd481332c475/ovn-acl-logging/0.log" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.373576 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-stqvf_c39d4fe5-06cd-4ea4-8336-bd481332c475/ovn-controller/0.log" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.373963 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.410276 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tz8jn_dac3c503-e284-4df8-ae5e-0084a884e456/kube-multus/2.log" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.410755 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tz8jn_dac3c503-e284-4df8-ae5e-0084a884e456/kube-multus/1.log" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.410793 4869 generic.go:334] "Generic (PLEG): container finished" podID="dac3c503-e284-4df8-ae5e-0084a884e456" containerID="a1ac73f7852a42d020dcf55e34ad0e1f39e08d7fbc25fee2d0148150ba37264b" exitCode=2 Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.410848 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tz8jn" event={"ID":"dac3c503-e284-4df8-ae5e-0084a884e456","Type":"ContainerDied","Data":"a1ac73f7852a42d020dcf55e34ad0e1f39e08d7fbc25fee2d0148150ba37264b"} Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.410881 4869 scope.go:117] "RemoveContainer" containerID="09e707217ddad89be77f68915c4948c1bbc2e44066f16cce7e255a5a91c1e101" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.411412 4869 scope.go:117] "RemoveContainer" containerID="a1ac73f7852a42d020dcf55e34ad0e1f39e08d7fbc25fee2d0148150ba37264b" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.425373 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-s96f5"] Jan 30 21:55:32 crc kubenswrapper[4869]: E0130 21:55:32.425556 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.425567 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 21:55:32 crc kubenswrapper[4869]: E0130 21:55:32.425578 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerName="nbdb" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.425585 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerName="nbdb" Jan 30 21:55:32 crc kubenswrapper[4869]: E0130 21:55:32.425593 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerName="ovnkube-controller" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.425598 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerName="ovnkube-controller" Jan 30 21:55:32 crc kubenswrapper[4869]: E0130 21:55:32.425606 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerName="ovn-controller" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.425612 4869 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerName="ovn-controller" Jan 30 21:55:32 crc kubenswrapper[4869]: E0130 21:55:32.425620 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerName="northd" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.425627 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerName="northd" Jan 30 21:55:32 crc kubenswrapper[4869]: E0130 21:55:32.425634 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerName="ovn-acl-logging" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.425640 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerName="ovn-acl-logging" Jan 30 21:55:32 crc kubenswrapper[4869]: E0130 21:55:32.425646 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerName="ovnkube-controller" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.425652 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerName="ovnkube-controller" Jan 30 21:55:32 crc kubenswrapper[4869]: E0130 21:55:32.425659 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerName="sbdb" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.425666 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerName="sbdb" Jan 30 21:55:32 crc kubenswrapper[4869]: E0130 21:55:32.425674 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerName="ovnkube-controller" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.425681 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerName="ovnkube-controller" Jan 30 21:55:32 crc kubenswrapper[4869]: E0130 21:55:32.425691 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerName="ovnkube-controller" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.425696 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerName="ovnkube-controller" Jan 30 21:55:32 crc kubenswrapper[4869]: E0130 21:55:32.425706 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerName="kube-rbac-proxy-node" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.425713 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerName="kube-rbac-proxy-node" Jan 30 21:55:32 crc kubenswrapper[4869]: E0130 21:55:32.425722 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cff2bdad-4c6e-44bc-977a-376e09638df1" containerName="registry" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.425728 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="cff2bdad-4c6e-44bc-977a-376e09638df1" containerName="registry" Jan 30 21:55:32 crc kubenswrapper[4869]: E0130 21:55:32.425735 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerName="kubecfg-setup" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.425741 4869 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerName="kubecfg-setup" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.425853 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerName="ovn-controller" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.425868 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerName="northd" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.425876 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerName="ovn-acl-logging" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.425885 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerName="ovnkube-controller" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.425912 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="cff2bdad-4c6e-44bc-977a-376e09638df1" containerName="registry" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.425929 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerName="ovnkube-controller" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.425940 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerName="kube-rbac-proxy-node" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.425950 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerName="ovnkube-controller" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.425956 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerName="nbdb" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.425964 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerName="sbdb" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.425972 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 21:55:32 crc kubenswrapper[4869]: E0130 21:55:32.426064 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerName="ovnkube-controller" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.426075 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerName="ovnkube-controller" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.426167 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerName="ovnkube-controller" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.426177 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerName="ovnkube-controller" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.426293 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-stqvf_c39d4fe5-06cd-4ea4-8336-bd481332c475/ovnkube-controller/3.log" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.428672 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.431579 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-stqvf_c39d4fe5-06cd-4ea4-8336-bd481332c475/ovn-acl-logging/0.log" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.432504 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-stqvf_c39d4fe5-06cd-4ea4-8336-bd481332c475/ovn-controller/0.log" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.433782 4869 generic.go:334] "Generic (PLEG): container finished" podID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerID="9d117969d303cc766951654a0c992d0feda615c5f64221565c8b3a44dc2c114f" exitCode=0 Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.433816 4869 generic.go:334] "Generic (PLEG): container finished" podID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerID="e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e" exitCode=0 Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.433827 4869 generic.go:334] "Generic (PLEG): container finished" podID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerID="54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86" exitCode=0 Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.433835 4869 generic.go:334] "Generic (PLEG): container finished" podID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerID="b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09" exitCode=0 Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.433842 4869 generic.go:334] "Generic (PLEG): container finished" podID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerID="3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36" exitCode=0 Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.433850 4869 generic.go:334] "Generic (PLEG): container finished" podID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerID="2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5" exitCode=0 Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.433857 4869 generic.go:334] "Generic (PLEG): container finished" podID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerID="4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8" exitCode=143 Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.433867 4869 generic.go:334] "Generic (PLEG): container finished" podID="c39d4fe5-06cd-4ea4-8336-bd481332c475" containerID="6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3" exitCode=143 Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.433888 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" event={"ID":"c39d4fe5-06cd-4ea4-8336-bd481332c475","Type":"ContainerDied","Data":"9d117969d303cc766951654a0c992d0feda615c5f64221565c8b3a44dc2c114f"} Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.433966 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" event={"ID":"c39d4fe5-06cd-4ea4-8336-bd481332c475","Type":"ContainerDied","Data":"e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e"} Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.433979 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" event={"ID":"c39d4fe5-06cd-4ea4-8336-bd481332c475","Type":"ContainerDied","Data":"54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86"} Jan 30 21:55:32 
crc kubenswrapper[4869]: I0130 21:55:32.433990 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" event={"ID":"c39d4fe5-06cd-4ea4-8336-bd481332c475","Type":"ContainerDied","Data":"b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09"} Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.434000 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" event={"ID":"c39d4fe5-06cd-4ea4-8336-bd481332c475","Type":"ContainerDied","Data":"3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36"} Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.434011 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" event={"ID":"c39d4fe5-06cd-4ea4-8336-bd481332c475","Type":"ContainerDied","Data":"2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5"} Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.434023 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9d117969d303cc766951654a0c992d0feda615c5f64221565c8b3a44dc2c114f"} Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.434035 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"92b01be743457d0429d6a2015dd3ee61689a3806691b49ca36b1cdf62717cb21"} Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.434042 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e"} Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.434049 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86"} Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.434056 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09"} Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.434063 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36"} Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.434070 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5"} Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.434077 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8"} Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.434084 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3"} Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.434091 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6"} Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.434101 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" event={"ID":"c39d4fe5-06cd-4ea4-8336-bd481332c475","Type":"ContainerDied","Data":"4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8"} Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.434111 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9d117969d303cc766951654a0c992d0feda615c5f64221565c8b3a44dc2c114f"} Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.434120 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"92b01be743457d0429d6a2015dd3ee61689a3806691b49ca36b1cdf62717cb21"} Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.434126 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e"} Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.434133 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86"} Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.434139 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09"} Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.434146 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36"} Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.434153 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5"} Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.434159 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8"} Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.434165 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3"} Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.434172 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6"} Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.434181 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" event={"ID":"c39d4fe5-06cd-4ea4-8336-bd481332c475","Type":"ContainerDied","Data":"6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3"} Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.434193 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9d117969d303cc766951654a0c992d0feda615c5f64221565c8b3a44dc2c114f"} Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.434201 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"92b01be743457d0429d6a2015dd3ee61689a3806691b49ca36b1cdf62717cb21"} Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 
21:55:32.434208 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e"} Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.434214 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86"} Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.434220 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09"} Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.434232 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36"} Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.434239 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5"} Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.434246 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8"} Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.434252 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3"} Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.434260 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6"} Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.434269 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" event={"ID":"c39d4fe5-06cd-4ea4-8336-bd481332c475","Type":"ContainerDied","Data":"3bde4b41d25105d1ae8bde167dfae92a242e3e870901cd1a1cc1fc2bbdc235bb"} Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.434280 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9d117969d303cc766951654a0c992d0feda615c5f64221565c8b3a44dc2c114f"} Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.434290 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"92b01be743457d0429d6a2015dd3ee61689a3806691b49ca36b1cdf62717cb21"} Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.434297 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e"} Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.434303 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86"} Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.434311 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09"} Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 
21:55:32.434318 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36"} Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.434324 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5"} Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.434331 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8"} Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.434338 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3"} Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.434346 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6"} Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.434375 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-stqvf" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.467926 4869 scope.go:117] "RemoveContainer" containerID="9d117969d303cc766951654a0c992d0feda615c5f64221565c8b3a44dc2c114f" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.482018 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c39d4fe5-06cd-4ea4-8336-bd481332c475-env-overrides\") pod \"c39d4fe5-06cd-4ea4-8336-bd481332c475\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.482063 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-etc-openvswitch\") pod \"c39d4fe5-06cd-4ea4-8336-bd481332c475\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.482101 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-host-kubelet\") pod \"c39d4fe5-06cd-4ea4-8336-bd481332c475\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.482133 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-host-run-netns\") pod \"c39d4fe5-06cd-4ea4-8336-bd481332c475\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.482157 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-node-log\") pod \"c39d4fe5-06cd-4ea4-8336-bd481332c475\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.482203 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-run-ovn\") pod \"c39d4fe5-06cd-4ea4-8336-bd481332c475\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.482224 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-host-slash\") pod \"c39d4fe5-06cd-4ea4-8336-bd481332c475\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.482242 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "c39d4fe5-06cd-4ea4-8336-bd481332c475" (UID: "c39d4fe5-06cd-4ea4-8336-bd481332c475"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.482273 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-node-log" (OuterVolumeSpecName: "node-log") pod "c39d4fe5-06cd-4ea4-8336-bd481332c475" (UID: "c39d4fe5-06cd-4ea4-8336-bd481332c475"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.482316 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-host-slash" (OuterVolumeSpecName: "host-slash") pod "c39d4fe5-06cd-4ea4-8336-bd481332c475" (UID: "c39d4fe5-06cd-4ea4-8336-bd481332c475"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.482295 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "c39d4fe5-06cd-4ea4-8336-bd481332c475" (UID: "c39d4fe5-06cd-4ea4-8336-bd481332c475"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.482303 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "c39d4fe5-06cd-4ea4-8336-bd481332c475" (UID: "c39d4fe5-06cd-4ea4-8336-bd481332c475"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.482255 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8w6z\" (UniqueName: \"kubernetes.io/projected/c39d4fe5-06cd-4ea4-8336-bd481332c475-kube-api-access-j8w6z\") pod \"c39d4fe5-06cd-4ea4-8336-bd481332c475\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.482356 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "c39d4fe5-06cd-4ea4-8336-bd481332c475" (UID: "c39d4fe5-06cd-4ea4-8336-bd481332c475"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.482390 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-host-cni-netd\") pod \"c39d4fe5-06cd-4ea4-8336-bd481332c475\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.482425 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "c39d4fe5-06cd-4ea4-8336-bd481332c475" (UID: "c39d4fe5-06cd-4ea4-8336-bd481332c475"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.482438 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-run-openvswitch\") pod \"c39d4fe5-06cd-4ea4-8336-bd481332c475\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.482477 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-systemd-units\") pod \"c39d4fe5-06cd-4ea4-8336-bd481332c475\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.482500 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "c39d4fe5-06cd-4ea4-8336-bd481332c475" (UID: "c39d4fe5-06cd-4ea4-8336-bd481332c475"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.482509 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c39d4fe5-06cd-4ea4-8336-bd481332c475-ovnkube-script-lib\") pod \"c39d4fe5-06cd-4ea4-8336-bd481332c475\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.482516 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c39d4fe5-06cd-4ea4-8336-bd481332c475-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "c39d4fe5-06cd-4ea4-8336-bd481332c475" (UID: "c39d4fe5-06cd-4ea4-8336-bd481332c475"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.482522 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "c39d4fe5-06cd-4ea4-8336-bd481332c475" (UID: "c39d4fe5-06cd-4ea4-8336-bd481332c475"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.482538 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c39d4fe5-06cd-4ea4-8336-bd481332c475-ovn-node-metrics-cert\") pod \"c39d4fe5-06cd-4ea4-8336-bd481332c475\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.482564 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-host-cni-bin\") pod \"c39d4fe5-06cd-4ea4-8336-bd481332c475\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.482608 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-run-systemd\") pod \"c39d4fe5-06cd-4ea4-8336-bd481332c475\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.482628 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-var-lib-openvswitch\") pod \"c39d4fe5-06cd-4ea4-8336-bd481332c475\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.482652 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c39d4fe5-06cd-4ea4-8336-bd481332c475-ovnkube-config\") pod \"c39d4fe5-06cd-4ea4-8336-bd481332c475\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.482671 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-host-var-lib-cni-networks-ovn-kubernetes\") pod \"c39d4fe5-06cd-4ea4-8336-bd481332c475\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.482703 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-log-socket\") pod \"c39d4fe5-06cd-4ea4-8336-bd481332c475\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.482722 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-host-run-ovn-kubernetes\") pod \"c39d4fe5-06cd-4ea4-8336-bd481332c475\" (UID: \"c39d4fe5-06cd-4ea4-8336-bd481332c475\") " Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.482786 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "c39d4fe5-06cd-4ea4-8336-bd481332c475" (UID: "c39d4fe5-06cd-4ea4-8336-bd481332c475"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.482791 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "c39d4fe5-06cd-4ea4-8336-bd481332c475" (UID: "c39d4fe5-06cd-4ea4-8336-bd481332c475"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.482828 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "c39d4fe5-06cd-4ea4-8336-bd481332c475" (UID: "c39d4fe5-06cd-4ea4-8336-bd481332c475"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.483117 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-log-socket" (OuterVolumeSpecName: "log-socket") pod "c39d4fe5-06cd-4ea4-8336-bd481332c475" (UID: "c39d4fe5-06cd-4ea4-8336-bd481332c475"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.483157 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c39d4fe5-06cd-4ea4-8336-bd481332c475-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "c39d4fe5-06cd-4ea4-8336-bd481332c475" (UID: "c39d4fe5-06cd-4ea4-8336-bd481332c475"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.483234 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "c39d4fe5-06cd-4ea4-8336-bd481332c475" (UID: "c39d4fe5-06cd-4ea4-8336-bd481332c475"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.483336 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c39d4fe5-06cd-4ea4-8336-bd481332c475-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "c39d4fe5-06cd-4ea4-8336-bd481332c475" (UID: "c39d4fe5-06cd-4ea4-8336-bd481332c475"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.483357 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1cf6f98d-ad9d-4c98-8649-5234231be9ad-etc-openvswitch\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.483389 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1cf6f98d-ad9d-4c98-8649-5234231be9ad-node-log\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.483430 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1cf6f98d-ad9d-4c98-8649-5234231be9ad-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.483467 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd94d\" (UniqueName: \"kubernetes.io/projected/1cf6f98d-ad9d-4c98-8649-5234231be9ad-kube-api-access-cd94d\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.483494 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1cf6f98d-ad9d-4c98-8649-5234231be9ad-host-run-netns\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.483521 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1cf6f98d-ad9d-4c98-8649-5234231be9ad-run-openvswitch\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.483548 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1cf6f98d-ad9d-4c98-8649-5234231be9ad-host-cni-bin\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.483606 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1cf6f98d-ad9d-4c98-8649-5234231be9ad-ovn-node-metrics-cert\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.483625 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/1cf6f98d-ad9d-4c98-8649-5234231be9ad-log-socket\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.483643 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1cf6f98d-ad9d-4c98-8649-5234231be9ad-run-ovn\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.483666 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1cf6f98d-ad9d-4c98-8649-5234231be9ad-systemd-units\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.483757 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1cf6f98d-ad9d-4c98-8649-5234231be9ad-ovnkube-config\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.483815 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1cf6f98d-ad9d-4c98-8649-5234231be9ad-host-slash\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.483869 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1cf6f98d-ad9d-4c98-8649-5234231be9ad-ovnkube-script-lib\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.484023 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1cf6f98d-ad9d-4c98-8649-5234231be9ad-env-overrides\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.484085 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1cf6f98d-ad9d-4c98-8649-5234231be9ad-host-cni-netd\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.484115 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1cf6f98d-ad9d-4c98-8649-5234231be9ad-var-lib-openvswitch\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.484140 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1cf6f98d-ad9d-4c98-8649-5234231be9ad-host-kubelet\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.484157 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1cf6f98d-ad9d-4c98-8649-5234231be9ad-run-systemd\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.484178 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1cf6f98d-ad9d-4c98-8649-5234231be9ad-host-run-ovn-kubernetes\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.484219 4869 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.484230 4869 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.484239 4869 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.484248 4869 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c39d4fe5-06cd-4ea4-8336-bd481332c475-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.484257 4869 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.484266 4869 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.484275 4869 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c39d4fe5-06cd-4ea4-8336-bd481332c475-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.484284 4869 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.484294 4869 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-log-socket\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.484303 4869 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.484312 4869 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c39d4fe5-06cd-4ea4-8336-bd481332c475-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.484321 4869 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.484329 4869 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.484338 4869 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.484347 4869 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-node-log\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.484355 4869 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.484390 4869 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-host-slash\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.487614 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c39d4fe5-06cd-4ea4-8336-bd481332c475-kube-api-access-j8w6z" (OuterVolumeSpecName: "kube-api-access-j8w6z") pod "c39d4fe5-06cd-4ea4-8336-bd481332c475" (UID: "c39d4fe5-06cd-4ea4-8336-bd481332c475"). InnerVolumeSpecName "kube-api-access-j8w6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.487793 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c39d4fe5-06cd-4ea4-8336-bd481332c475-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "c39d4fe5-06cd-4ea4-8336-bd481332c475" (UID: "c39d4fe5-06cd-4ea4-8336-bd481332c475"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.496752 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "c39d4fe5-06cd-4ea4-8336-bd481332c475" (UID: "c39d4fe5-06cd-4ea4-8336-bd481332c475"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.497818 4869 scope.go:117] "RemoveContainer" containerID="92b01be743457d0429d6a2015dd3ee61689a3806691b49ca36b1cdf62717cb21" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.512949 4869 scope.go:117] "RemoveContainer" containerID="e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.523784 4869 scope.go:117] "RemoveContainer" containerID="54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.536227 4869 scope.go:117] "RemoveContainer" containerID="b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.547461 4869 scope.go:117] "RemoveContainer" containerID="3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.562162 4869 scope.go:117] "RemoveContainer" containerID="2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.573174 4869 scope.go:117] "RemoveContainer" containerID="4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.584922 4869 scope.go:117] "RemoveContainer" containerID="6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.585296 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd94d\" (UniqueName: \"kubernetes.io/projected/1cf6f98d-ad9d-4c98-8649-5234231be9ad-kube-api-access-cd94d\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.585361 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1cf6f98d-ad9d-4c98-8649-5234231be9ad-host-run-netns\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.585395 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1cf6f98d-ad9d-4c98-8649-5234231be9ad-run-openvswitch\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.585416 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1cf6f98d-ad9d-4c98-8649-5234231be9ad-host-cni-bin\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.585440 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1cf6f98d-ad9d-4c98-8649-5234231be9ad-ovn-node-metrics-cert\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.585468 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1cf6f98d-ad9d-4c98-8649-5234231be9ad-log-socket\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.585479 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1cf6f98d-ad9d-4c98-8649-5234231be9ad-host-run-netns\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.585478 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1cf6f98d-ad9d-4c98-8649-5234231be9ad-run-openvswitch\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.585497 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1cf6f98d-ad9d-4c98-8649-5234231be9ad-run-ovn\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.585523 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1cf6f98d-ad9d-4c98-8649-5234231be9ad-log-socket\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.585530 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1cf6f98d-ad9d-4c98-8649-5234231be9ad-systemd-units\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.585550 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1cf6f98d-ad9d-4c98-8649-5234231be9ad-ovnkube-config\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.585554 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1cf6f98d-ad9d-4c98-8649-5234231be9ad-run-ovn\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.585569 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1cf6f98d-ad9d-4c98-8649-5234231be9ad-host-slash\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.585564 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1cf6f98d-ad9d-4c98-8649-5234231be9ad-host-cni-bin\") pod \"ovnkube-node-s96f5\" 
(UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.585580 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1cf6f98d-ad9d-4c98-8649-5234231be9ad-systemd-units\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.585585 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1cf6f98d-ad9d-4c98-8649-5234231be9ad-ovnkube-script-lib\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.585663 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1cf6f98d-ad9d-4c98-8649-5234231be9ad-host-slash\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.585697 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1cf6f98d-ad9d-4c98-8649-5234231be9ad-env-overrides\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.585735 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1cf6f98d-ad9d-4c98-8649-5234231be9ad-host-cni-netd\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.585766 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1cf6f98d-ad9d-4c98-8649-5234231be9ad-var-lib-openvswitch\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.585789 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1cf6f98d-ad9d-4c98-8649-5234231be9ad-host-kubelet\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.585804 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1cf6f98d-ad9d-4c98-8649-5234231be9ad-run-systemd\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.585826 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1cf6f98d-ad9d-4c98-8649-5234231be9ad-host-run-ovn-kubernetes\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" 
Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.585879 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1cf6f98d-ad9d-4c98-8649-5234231be9ad-etc-openvswitch\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.585913 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1cf6f98d-ad9d-4c98-8649-5234231be9ad-node-log\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.585940 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1cf6f98d-ad9d-4c98-8649-5234231be9ad-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.586007 4869 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c39d4fe5-06cd-4ea4-8336-bd481332c475-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.586020 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8w6z\" (UniqueName: \"kubernetes.io/projected/c39d4fe5-06cd-4ea4-8336-bd481332c475-kube-api-access-j8w6z\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.586031 4869 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c39d4fe5-06cd-4ea4-8336-bd481332c475-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.586058 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1cf6f98d-ad9d-4c98-8649-5234231be9ad-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.586226 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1cf6f98d-ad9d-4c98-8649-5234231be9ad-ovnkube-config\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.586270 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1cf6f98d-ad9d-4c98-8649-5234231be9ad-run-systemd\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.586295 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1cf6f98d-ad9d-4c98-8649-5234231be9ad-host-cni-netd\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.586316 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1cf6f98d-ad9d-4c98-8649-5234231be9ad-var-lib-openvswitch\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.586340 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1cf6f98d-ad9d-4c98-8649-5234231be9ad-host-kubelet\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.586367 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1cf6f98d-ad9d-4c98-8649-5234231be9ad-etc-openvswitch\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.586392 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1cf6f98d-ad9d-4c98-8649-5234231be9ad-host-run-ovn-kubernetes\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.586415 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1cf6f98d-ad9d-4c98-8649-5234231be9ad-node-log\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.586488 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1cf6f98d-ad9d-4c98-8649-5234231be9ad-ovnkube-script-lib\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.586547 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1cf6f98d-ad9d-4c98-8649-5234231be9ad-env-overrides\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.590220 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1cf6f98d-ad9d-4c98-8649-5234231be9ad-ovn-node-metrics-cert\") pod \"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.599099 4869 scope.go:117] "RemoveContainer" containerID="892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.603219 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd94d\" (UniqueName: \"kubernetes.io/projected/1cf6f98d-ad9d-4c98-8649-5234231be9ad-kube-api-access-cd94d\") pod 
\"ovnkube-node-s96f5\" (UID: \"1cf6f98d-ad9d-4c98-8649-5234231be9ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.611670 4869 scope.go:117] "RemoveContainer" containerID="9d117969d303cc766951654a0c992d0feda615c5f64221565c8b3a44dc2c114f" Jan 30 21:55:32 crc kubenswrapper[4869]: E0130 21:55:32.611997 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d117969d303cc766951654a0c992d0feda615c5f64221565c8b3a44dc2c114f\": container with ID starting with 9d117969d303cc766951654a0c992d0feda615c5f64221565c8b3a44dc2c114f not found: ID does not exist" containerID="9d117969d303cc766951654a0c992d0feda615c5f64221565c8b3a44dc2c114f" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.612041 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d117969d303cc766951654a0c992d0feda615c5f64221565c8b3a44dc2c114f"} err="failed to get container status \"9d117969d303cc766951654a0c992d0feda615c5f64221565c8b3a44dc2c114f\": rpc error: code = NotFound desc = could not find container \"9d117969d303cc766951654a0c992d0feda615c5f64221565c8b3a44dc2c114f\": container with ID starting with 9d117969d303cc766951654a0c992d0feda615c5f64221565c8b3a44dc2c114f not found: ID does not exist" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.612073 4869 scope.go:117] "RemoveContainer" containerID="92b01be743457d0429d6a2015dd3ee61689a3806691b49ca36b1cdf62717cb21" Jan 30 21:55:32 crc kubenswrapper[4869]: E0130 21:55:32.612430 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92b01be743457d0429d6a2015dd3ee61689a3806691b49ca36b1cdf62717cb21\": container with ID starting with 92b01be743457d0429d6a2015dd3ee61689a3806691b49ca36b1cdf62717cb21 not found: ID does not exist" containerID="92b01be743457d0429d6a2015dd3ee61689a3806691b49ca36b1cdf62717cb21" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.612461 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92b01be743457d0429d6a2015dd3ee61689a3806691b49ca36b1cdf62717cb21"} err="failed to get container status \"92b01be743457d0429d6a2015dd3ee61689a3806691b49ca36b1cdf62717cb21\": rpc error: code = NotFound desc = could not find container \"92b01be743457d0429d6a2015dd3ee61689a3806691b49ca36b1cdf62717cb21\": container with ID starting with 92b01be743457d0429d6a2015dd3ee61689a3806691b49ca36b1cdf62717cb21 not found: ID does not exist" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.612501 4869 scope.go:117] "RemoveContainer" containerID="e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e" Jan 30 21:55:32 crc kubenswrapper[4869]: E0130 21:55:32.612711 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e\": container with ID starting with e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e not found: ID does not exist" containerID="e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.612734 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e"} err="failed to get container status 
\"e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e\": rpc error: code = NotFound desc = could not find container \"e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e\": container with ID starting with e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e not found: ID does not exist" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.612748 4869 scope.go:117] "RemoveContainer" containerID="54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86" Jan 30 21:55:32 crc kubenswrapper[4869]: E0130 21:55:32.612972 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86\": container with ID starting with 54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86 not found: ID does not exist" containerID="54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.613000 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86"} err="failed to get container status \"54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86\": rpc error: code = NotFound desc = could not find container \"54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86\": container with ID starting with 54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86 not found: ID does not exist" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.613016 4869 scope.go:117] "RemoveContainer" containerID="b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09" Jan 30 21:55:32 crc kubenswrapper[4869]: E0130 21:55:32.613225 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09\": container with ID starting with b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09 not found: ID does not exist" containerID="b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.613249 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09"} err="failed to get container status \"b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09\": rpc error: code = NotFound desc = could not find container \"b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09\": container with ID starting with b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09 not found: ID does not exist" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.613285 4869 scope.go:117] "RemoveContainer" containerID="3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36" Jan 30 21:55:32 crc kubenswrapper[4869]: E0130 21:55:32.613564 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36\": container with ID starting with 3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36 not found: ID does not exist" containerID="3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.613596 4869 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36"} err="failed to get container status \"3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36\": rpc error: code = NotFound desc = could not find container \"3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36\": container with ID starting with 3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36 not found: ID does not exist" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.613617 4869 scope.go:117] "RemoveContainer" containerID="2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5" Jan 30 21:55:32 crc kubenswrapper[4869]: E0130 21:55:32.613830 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5\": container with ID starting with 2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5 not found: ID does not exist" containerID="2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.613854 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5"} err="failed to get container status \"2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5\": rpc error: code = NotFound desc = could not find container \"2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5\": container with ID starting with 2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5 not found: ID does not exist" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.613867 4869 scope.go:117] "RemoveContainer" containerID="4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8" Jan 30 21:55:32 crc kubenswrapper[4869]: E0130 21:55:32.614323 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8\": container with ID starting with 4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8 not found: ID does not exist" containerID="4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.614355 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8"} err="failed to get container status \"4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8\": rpc error: code = NotFound desc = could not find container \"4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8\": container with ID starting with 4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8 not found: ID does not exist" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.614376 4869 scope.go:117] "RemoveContainer" containerID="6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3" Jan 30 21:55:32 crc kubenswrapper[4869]: E0130 21:55:32.614581 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3\": container with ID starting with 6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3 not found: ID does not exist" 
containerID="6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.614605 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3"} err="failed to get container status \"6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3\": rpc error: code = NotFound desc = could not find container \"6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3\": container with ID starting with 6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3 not found: ID does not exist" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.614619 4869 scope.go:117] "RemoveContainer" containerID="892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6" Jan 30 21:55:32 crc kubenswrapper[4869]: E0130 21:55:32.614836 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\": container with ID starting with 892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6 not found: ID does not exist" containerID="892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.614862 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6"} err="failed to get container status \"892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\": rpc error: code = NotFound desc = could not find container \"892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\": container with ID starting with 892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6 not found: ID does not exist" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.614876 4869 scope.go:117] "RemoveContainer" containerID="9d117969d303cc766951654a0c992d0feda615c5f64221565c8b3a44dc2c114f" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.615120 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d117969d303cc766951654a0c992d0feda615c5f64221565c8b3a44dc2c114f"} err="failed to get container status \"9d117969d303cc766951654a0c992d0feda615c5f64221565c8b3a44dc2c114f\": rpc error: code = NotFound desc = could not find container \"9d117969d303cc766951654a0c992d0feda615c5f64221565c8b3a44dc2c114f\": container with ID starting with 9d117969d303cc766951654a0c992d0feda615c5f64221565c8b3a44dc2c114f not found: ID does not exist" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.615141 4869 scope.go:117] "RemoveContainer" containerID="92b01be743457d0429d6a2015dd3ee61689a3806691b49ca36b1cdf62717cb21" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.615341 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92b01be743457d0429d6a2015dd3ee61689a3806691b49ca36b1cdf62717cb21"} err="failed to get container status \"92b01be743457d0429d6a2015dd3ee61689a3806691b49ca36b1cdf62717cb21\": rpc error: code = NotFound desc = could not find container \"92b01be743457d0429d6a2015dd3ee61689a3806691b49ca36b1cdf62717cb21\": container with ID starting with 92b01be743457d0429d6a2015dd3ee61689a3806691b49ca36b1cdf62717cb21 not found: ID does not exist" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.615362 4869 scope.go:117] "RemoveContainer" 
containerID="e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.615535 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e"} err="failed to get container status \"e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e\": rpc error: code = NotFound desc = could not find container \"e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e\": container with ID starting with e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e not found: ID does not exist" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.615553 4869 scope.go:117] "RemoveContainer" containerID="54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.615736 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86"} err="failed to get container status \"54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86\": rpc error: code = NotFound desc = could not find container \"54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86\": container with ID starting with 54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86 not found: ID does not exist" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.615756 4869 scope.go:117] "RemoveContainer" containerID="b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.616012 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09"} err="failed to get container status \"b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09\": rpc error: code = NotFound desc = could not find container \"b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09\": container with ID starting with b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09 not found: ID does not exist" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.616033 4869 scope.go:117] "RemoveContainer" containerID="3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.616235 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36"} err="failed to get container status \"3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36\": rpc error: code = NotFound desc = could not find container \"3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36\": container with ID starting with 3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36 not found: ID does not exist" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.616256 4869 scope.go:117] "RemoveContainer" containerID="2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.616475 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5"} err="failed to get container status \"2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5\": rpc error: code = NotFound desc = could not find 
container \"2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5\": container with ID starting with 2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5 not found: ID does not exist" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.616501 4869 scope.go:117] "RemoveContainer" containerID="4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.616688 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8"} err="failed to get container status \"4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8\": rpc error: code = NotFound desc = could not find container \"4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8\": container with ID starting with 4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8 not found: ID does not exist" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.616705 4869 scope.go:117] "RemoveContainer" containerID="6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.616876 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3"} err="failed to get container status \"6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3\": rpc error: code = NotFound desc = could not find container \"6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3\": container with ID starting with 6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3 not found: ID does not exist" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.616911 4869 scope.go:117] "RemoveContainer" containerID="892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.617200 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6"} err="failed to get container status \"892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\": rpc error: code = NotFound desc = could not find container \"892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\": container with ID starting with 892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6 not found: ID does not exist" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.617220 4869 scope.go:117] "RemoveContainer" containerID="9d117969d303cc766951654a0c992d0feda615c5f64221565c8b3a44dc2c114f" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.617403 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d117969d303cc766951654a0c992d0feda615c5f64221565c8b3a44dc2c114f"} err="failed to get container status \"9d117969d303cc766951654a0c992d0feda615c5f64221565c8b3a44dc2c114f\": rpc error: code = NotFound desc = could not find container \"9d117969d303cc766951654a0c992d0feda615c5f64221565c8b3a44dc2c114f\": container with ID starting with 9d117969d303cc766951654a0c992d0feda615c5f64221565c8b3a44dc2c114f not found: ID does not exist" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.617421 4869 scope.go:117] "RemoveContainer" containerID="92b01be743457d0429d6a2015dd3ee61689a3806691b49ca36b1cdf62717cb21" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.617590 4869 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92b01be743457d0429d6a2015dd3ee61689a3806691b49ca36b1cdf62717cb21"} err="failed to get container status \"92b01be743457d0429d6a2015dd3ee61689a3806691b49ca36b1cdf62717cb21\": rpc error: code = NotFound desc = could not find container \"92b01be743457d0429d6a2015dd3ee61689a3806691b49ca36b1cdf62717cb21\": container with ID starting with 92b01be743457d0429d6a2015dd3ee61689a3806691b49ca36b1cdf62717cb21 not found: ID does not exist" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.617607 4869 scope.go:117] "RemoveContainer" containerID="e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.617781 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e"} err="failed to get container status \"e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e\": rpc error: code = NotFound desc = could not find container \"e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e\": container with ID starting with e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e not found: ID does not exist" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.617798 4869 scope.go:117] "RemoveContainer" containerID="54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.617985 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86"} err="failed to get container status \"54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86\": rpc error: code = NotFound desc = could not find container \"54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86\": container with ID starting with 54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86 not found: ID does not exist" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.618002 4869 scope.go:117] "RemoveContainer" containerID="b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.618164 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09"} err="failed to get container status \"b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09\": rpc error: code = NotFound desc = could not find container \"b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09\": container with ID starting with b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09 not found: ID does not exist" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.618182 4869 scope.go:117] "RemoveContainer" containerID="3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.618349 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36"} err="failed to get container status \"3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36\": rpc error: code = NotFound desc = could not find container \"3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36\": container with ID starting with 
3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36 not found: ID does not exist" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.618368 4869 scope.go:117] "RemoveContainer" containerID="2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.618526 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5"} err="failed to get container status \"2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5\": rpc error: code = NotFound desc = could not find container \"2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5\": container with ID starting with 2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5 not found: ID does not exist" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.618543 4869 scope.go:117] "RemoveContainer" containerID="4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.618711 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8"} err="failed to get container status \"4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8\": rpc error: code = NotFound desc = could not find container \"4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8\": container with ID starting with 4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8 not found: ID does not exist" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.618729 4869 scope.go:117] "RemoveContainer" containerID="6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.618888 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3"} err="failed to get container status \"6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3\": rpc error: code = NotFound desc = could not find container \"6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3\": container with ID starting with 6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3 not found: ID does not exist" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.618951 4869 scope.go:117] "RemoveContainer" containerID="892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.619105 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6"} err="failed to get container status \"892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\": rpc error: code = NotFound desc = could not find container \"892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\": container with ID starting with 892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6 not found: ID does not exist" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.619122 4869 scope.go:117] "RemoveContainer" containerID="9d117969d303cc766951654a0c992d0feda615c5f64221565c8b3a44dc2c114f" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.619282 4869 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9d117969d303cc766951654a0c992d0feda615c5f64221565c8b3a44dc2c114f"} err="failed to get container status \"9d117969d303cc766951654a0c992d0feda615c5f64221565c8b3a44dc2c114f\": rpc error: code = NotFound desc = could not find container \"9d117969d303cc766951654a0c992d0feda615c5f64221565c8b3a44dc2c114f\": container with ID starting with 9d117969d303cc766951654a0c992d0feda615c5f64221565c8b3a44dc2c114f not found: ID does not exist" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.619298 4869 scope.go:117] "RemoveContainer" containerID="92b01be743457d0429d6a2015dd3ee61689a3806691b49ca36b1cdf62717cb21" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.619465 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92b01be743457d0429d6a2015dd3ee61689a3806691b49ca36b1cdf62717cb21"} err="failed to get container status \"92b01be743457d0429d6a2015dd3ee61689a3806691b49ca36b1cdf62717cb21\": rpc error: code = NotFound desc = could not find container \"92b01be743457d0429d6a2015dd3ee61689a3806691b49ca36b1cdf62717cb21\": container with ID starting with 92b01be743457d0429d6a2015dd3ee61689a3806691b49ca36b1cdf62717cb21 not found: ID does not exist" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.619483 4869 scope.go:117] "RemoveContainer" containerID="e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.619662 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e"} err="failed to get container status \"e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e\": rpc error: code = NotFound desc = could not find container \"e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e\": container with ID starting with e54626b59807507b58d5d5da7fcc0f31ab4a1e48268d3d457b50b8acef88841e not found: ID does not exist" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.619679 4869 scope.go:117] "RemoveContainer" containerID="54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.619853 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86"} err="failed to get container status \"54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86\": rpc error: code = NotFound desc = could not find container \"54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86\": container with ID starting with 54f0d336700889795f23498e78601820d5764eedca01711b62d1be0a826d2f86 not found: ID does not exist" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.619872 4869 scope.go:117] "RemoveContainer" containerID="b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.620050 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09"} err="failed to get container status \"b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09\": rpc error: code = NotFound desc = could not find container \"b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09\": container with ID starting with b7627fa1ff69874d64267e0ef6a02958d5401a1492eb2b43c3b462327ef49a09 not found: ID does not exist" Jan 
30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.620067 4869 scope.go:117] "RemoveContainer" containerID="3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.620243 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36"} err="failed to get container status \"3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36\": rpc error: code = NotFound desc = could not find container \"3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36\": container with ID starting with 3408f5a8e47d1a65ab80dbe7d137efac515b099a45b4ce0d91f865937d150f36 not found: ID does not exist" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.620260 4869 scope.go:117] "RemoveContainer" containerID="2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.620446 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5"} err="failed to get container status \"2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5\": rpc error: code = NotFound desc = could not find container \"2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5\": container with ID starting with 2b9ca55bc3b8727a297736b2a7d221285316efc8114dac642db2c0b3b72c6ed5 not found: ID does not exist" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.620463 4869 scope.go:117] "RemoveContainer" containerID="4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.620634 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8"} err="failed to get container status \"4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8\": rpc error: code = NotFound desc = could not find container \"4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8\": container with ID starting with 4350fdd3fc32c9493adfd52b3312e11a5a993f1240991eefa5c72f8cd770f3d8 not found: ID does not exist" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.620651 4869 scope.go:117] "RemoveContainer" containerID="6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.620812 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3"} err="failed to get container status \"6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3\": rpc error: code = NotFound desc = could not find container \"6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3\": container with ID starting with 6b39c78b8ab37d4c3c336bdac4f0cd3c411827830ca095fbb6011b6ddeda59f3 not found: ID does not exist" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.620828 4869 scope.go:117] "RemoveContainer" containerID="892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.621012 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6"} err="failed to get container status 
\"892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\": rpc error: code = NotFound desc = could not find container \"892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6\": container with ID starting with 892a838a85ca74337ed9669fce6a3c437c435dc135410d8bea36482138b2a9a6 not found: ID does not exist" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.757247 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.766482 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-stqvf"] Jan 30 21:55:32 crc kubenswrapper[4869]: I0130 21:55:32.774937 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-stqvf"] Jan 30 21:55:32 crc kubenswrapper[4869]: W0130 21:55:32.794620 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1cf6f98d_ad9d_4c98_8649_5234231be9ad.slice/crio-87ffa08f0dec4d92fc327551bf3e9b5bce99382b63d6b2b480212bf6873e5cab WatchSource:0}: Error finding container 87ffa08f0dec4d92fc327551bf3e9b5bce99382b63d6b2b480212bf6873e5cab: Status 404 returned error can't find the container with id 87ffa08f0dec4d92fc327551bf3e9b5bce99382b63d6b2b480212bf6873e5cab Jan 30 21:55:33 crc kubenswrapper[4869]: I0130 21:55:33.442434 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tz8jn_dac3c503-e284-4df8-ae5e-0084a884e456/kube-multus/2.log" Jan 30 21:55:33 crc kubenswrapper[4869]: I0130 21:55:33.442538 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tz8jn" event={"ID":"dac3c503-e284-4df8-ae5e-0084a884e456","Type":"ContainerStarted","Data":"ba8054089d924b517ff20dd8980dfe0c7d66a1eac2e1e19d7767ae9b5403c5b1"} Jan 30 21:55:33 crc kubenswrapper[4869]: I0130 21:55:33.443658 4869 generic.go:334] "Generic (PLEG): container finished" podID="1cf6f98d-ad9d-4c98-8649-5234231be9ad" containerID="fcf34df614eb935ea0fe9f15485f4d6cdbd8e16ecafc412efb1205306f2025c5" exitCode=0 Jan 30 21:55:33 crc kubenswrapper[4869]: I0130 21:55:33.443704 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" event={"ID":"1cf6f98d-ad9d-4c98-8649-5234231be9ad","Type":"ContainerDied","Data":"fcf34df614eb935ea0fe9f15485f4d6cdbd8e16ecafc412efb1205306f2025c5"} Jan 30 21:55:33 crc kubenswrapper[4869]: I0130 21:55:33.443719 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" event={"ID":"1cf6f98d-ad9d-4c98-8649-5234231be9ad","Type":"ContainerStarted","Data":"87ffa08f0dec4d92fc327551bf3e9b5bce99382b63d6b2b480212bf6873e5cab"} Jan 30 21:55:33 crc kubenswrapper[4869]: I0130 21:55:33.883789 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c39d4fe5-06cd-4ea4-8336-bd481332c475" path="/var/lib/kubelet/pods/c39d4fe5-06cd-4ea4-8336-bd481332c475/volumes" Jan 30 21:55:34 crc kubenswrapper[4869]: I0130 21:55:34.454854 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" event={"ID":"1cf6f98d-ad9d-4c98-8649-5234231be9ad","Type":"ContainerStarted","Data":"bc4b5c5041a2fb826dddac89c812f8fda6eddd5aec68c23082d4105600dbd0b6"} Jan 30 21:55:34 crc kubenswrapper[4869]: I0130 21:55:34.455202 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" 
event={"ID":"1cf6f98d-ad9d-4c98-8649-5234231be9ad","Type":"ContainerStarted","Data":"378bce9809792f6842682e26f5de2592d91996daefb616071f69e1fd5bdecd74"} Jan 30 21:55:34 crc kubenswrapper[4869]: I0130 21:55:34.455223 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" event={"ID":"1cf6f98d-ad9d-4c98-8649-5234231be9ad","Type":"ContainerStarted","Data":"08cfbbf53dd0571a76e6a28ce8af475698746b857657a5d8711032feb5ecbd2b"} Jan 30 21:55:34 crc kubenswrapper[4869]: I0130 21:55:34.455240 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" event={"ID":"1cf6f98d-ad9d-4c98-8649-5234231be9ad","Type":"ContainerStarted","Data":"35b19ce605bc2e53f35c0ec223a6b9fe26c483a13fa1e6f43f81116464326ca9"} Jan 30 21:55:34 crc kubenswrapper[4869]: I0130 21:55:34.455251 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" event={"ID":"1cf6f98d-ad9d-4c98-8649-5234231be9ad","Type":"ContainerStarted","Data":"87d18c1a9904a5eb191195b0c1907d553132bf049c2807e719eda6f354f5d73c"} Jan 30 21:55:34 crc kubenswrapper[4869]: I0130 21:55:34.455264 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" event={"ID":"1cf6f98d-ad9d-4c98-8649-5234231be9ad","Type":"ContainerStarted","Data":"27ea2dc1c4754fba165609234fd4534f45f9e93db9ce4b532b7678c15639ff52"} Jan 30 21:55:36 crc kubenswrapper[4869]: I0130 21:55:36.483315 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" event={"ID":"1cf6f98d-ad9d-4c98-8649-5234231be9ad","Type":"ContainerStarted","Data":"9ec6dbe1d6a99df1e9c34f113150ad4bccb14abefc4eac8f5e2813be81675dac"} Jan 30 21:55:39 crc kubenswrapper[4869]: I0130 21:55:39.502877 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" event={"ID":"1cf6f98d-ad9d-4c98-8649-5234231be9ad","Type":"ContainerStarted","Data":"a047d7862aeff9c2f3c6108d32993d0b5a35ed8bd53ec43d236ff5cdc7856b65"} Jan 30 21:55:39 crc kubenswrapper[4869]: I0130 21:55:39.529516 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" podStartSLOduration=7.529499516 podStartE2EDuration="7.529499516s" podCreationTimestamp="2026-01-30 21:55:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:55:39.526701299 +0000 UTC m=+740.412459524" watchObservedRunningTime="2026-01-30 21:55:39.529499516 +0000 UTC m=+740.415257541" Jan 30 21:55:40 crc kubenswrapper[4869]: I0130 21:55:40.507854 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:40 crc kubenswrapper[4869]: I0130 21:55:40.508253 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:40 crc kubenswrapper[4869]: I0130 21:55:40.508264 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:40 crc kubenswrapper[4869]: I0130 21:55:40.580794 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:40 crc kubenswrapper[4869]: I0130 21:55:40.580867 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:55:55 crc kubenswrapper[4869]: I0130 21:55:55.137159 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5q244"] Jan 30 21:55:55 crc kubenswrapper[4869]: I0130 21:55:55.139457 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5q244" Jan 30 21:55:55 crc kubenswrapper[4869]: I0130 21:55:55.142295 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 21:55:55 crc kubenswrapper[4869]: I0130 21:55:55.150310 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5q244"] Jan 30 21:55:55 crc kubenswrapper[4869]: I0130 21:55:55.279314 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/565c0cfa-b127-4da5-a3be-d660b5224997-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5q244\" (UID: \"565c0cfa-b127-4da5-a3be-d660b5224997\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5q244" Jan 30 21:55:55 crc kubenswrapper[4869]: I0130 21:55:55.279381 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jqvd\" (UniqueName: \"kubernetes.io/projected/565c0cfa-b127-4da5-a3be-d660b5224997-kube-api-access-4jqvd\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5q244\" (UID: \"565c0cfa-b127-4da5-a3be-d660b5224997\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5q244" Jan 30 21:55:55 crc kubenswrapper[4869]: I0130 21:55:55.279413 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/565c0cfa-b127-4da5-a3be-d660b5224997-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5q244\" (UID: \"565c0cfa-b127-4da5-a3be-d660b5224997\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5q244" Jan 30 21:55:55 crc kubenswrapper[4869]: I0130 21:55:55.381078 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/565c0cfa-b127-4da5-a3be-d660b5224997-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5q244\" (UID: \"565c0cfa-b127-4da5-a3be-d660b5224997\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5q244" Jan 30 21:55:55 crc kubenswrapper[4869]: I0130 21:55:55.381412 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/565c0cfa-b127-4da5-a3be-d660b5224997-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5q244\" (UID: \"565c0cfa-b127-4da5-a3be-d660b5224997\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5q244" Jan 30 21:55:55 crc kubenswrapper[4869]: I0130 21:55:55.381549 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jqvd\" (UniqueName: \"kubernetes.io/projected/565c0cfa-b127-4da5-a3be-d660b5224997-kube-api-access-4jqvd\") pod 
\"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5q244\" (UID: \"565c0cfa-b127-4da5-a3be-d660b5224997\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5q244" Jan 30 21:55:55 crc kubenswrapper[4869]: I0130 21:55:55.381785 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/565c0cfa-b127-4da5-a3be-d660b5224997-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5q244\" (UID: \"565c0cfa-b127-4da5-a3be-d660b5224997\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5q244" Jan 30 21:55:55 crc kubenswrapper[4869]: I0130 21:55:55.382099 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/565c0cfa-b127-4da5-a3be-d660b5224997-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5q244\" (UID: \"565c0cfa-b127-4da5-a3be-d660b5224997\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5q244" Jan 30 21:55:55 crc kubenswrapper[4869]: I0130 21:55:55.406043 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jqvd\" (UniqueName: \"kubernetes.io/projected/565c0cfa-b127-4da5-a3be-d660b5224997-kube-api-access-4jqvd\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5q244\" (UID: \"565c0cfa-b127-4da5-a3be-d660b5224997\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5q244" Jan 30 21:55:55 crc kubenswrapper[4869]: I0130 21:55:55.463934 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5q244" Jan 30 21:55:55 crc kubenswrapper[4869]: I0130 21:55:55.704736 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5q244"] Jan 30 21:55:55 crc kubenswrapper[4869]: W0130 21:55:55.718047 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod565c0cfa_b127_4da5_a3be_d660b5224997.slice/crio-bebbbe5db2dde85c359feaf06b9300b4633050358de409f252c42f20e070dc98 WatchSource:0}: Error finding container bebbbe5db2dde85c359feaf06b9300b4633050358de409f252c42f20e070dc98: Status 404 returned error can't find the container with id bebbbe5db2dde85c359feaf06b9300b4633050358de409f252c42f20e070dc98 Jan 30 21:55:56 crc kubenswrapper[4869]: I0130 21:55:56.603657 4869 generic.go:334] "Generic (PLEG): container finished" podID="565c0cfa-b127-4da5-a3be-d660b5224997" containerID="3e0c17a35de61eb47fd0baae62ff8cb869bdedcaf34219d5c798c100bcaa2d69" exitCode=0 Jan 30 21:55:56 crc kubenswrapper[4869]: I0130 21:55:56.603706 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5q244" event={"ID":"565c0cfa-b127-4da5-a3be-d660b5224997","Type":"ContainerDied","Data":"3e0c17a35de61eb47fd0baae62ff8cb869bdedcaf34219d5c798c100bcaa2d69"} Jan 30 21:55:56 crc kubenswrapper[4869]: I0130 21:55:56.603731 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5q244" event={"ID":"565c0cfa-b127-4da5-a3be-d660b5224997","Type":"ContainerStarted","Data":"bebbbe5db2dde85c359feaf06b9300b4633050358de409f252c42f20e070dc98"} Jan 30 21:55:56 crc 
kubenswrapper[4869]: I0130 21:55:56.605708 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 21:55:58 crc kubenswrapper[4869]: I0130 21:55:58.612927 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5q244" event={"ID":"565c0cfa-b127-4da5-a3be-d660b5224997","Type":"ContainerDied","Data":"c01478123141b1c2283404f4eab5f8d73534890b456ee4ae0f0e872211c57e29"} Jan 30 21:55:58 crc kubenswrapper[4869]: I0130 21:55:58.612880 4869 generic.go:334] "Generic (PLEG): container finished" podID="565c0cfa-b127-4da5-a3be-d660b5224997" containerID="c01478123141b1c2283404f4eab5f8d73534890b456ee4ae0f0e872211c57e29" exitCode=0 Jan 30 21:55:59 crc kubenswrapper[4869]: I0130 21:55:59.621358 4869 generic.go:334] "Generic (PLEG): container finished" podID="565c0cfa-b127-4da5-a3be-d660b5224997" containerID="2745632dfa0c42f7762d70f62516abe801a1b9147ef9dafdd98efab6199b518b" exitCode=0 Jan 30 21:55:59 crc kubenswrapper[4869]: I0130 21:55:59.621452 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5q244" event={"ID":"565c0cfa-b127-4da5-a3be-d660b5224997","Type":"ContainerDied","Data":"2745632dfa0c42f7762d70f62516abe801a1b9147ef9dafdd98efab6199b518b"} Jan 30 21:56:00 crc kubenswrapper[4869]: I0130 21:56:00.840435 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5q244" Jan 30 21:56:00 crc kubenswrapper[4869]: I0130 21:56:00.955046 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/565c0cfa-b127-4da5-a3be-d660b5224997-util\") pod \"565c0cfa-b127-4da5-a3be-d660b5224997\" (UID: \"565c0cfa-b127-4da5-a3be-d660b5224997\") " Jan 30 21:56:00 crc kubenswrapper[4869]: I0130 21:56:00.955375 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/565c0cfa-b127-4da5-a3be-d660b5224997-bundle\") pod \"565c0cfa-b127-4da5-a3be-d660b5224997\" (UID: \"565c0cfa-b127-4da5-a3be-d660b5224997\") " Jan 30 21:56:00 crc kubenswrapper[4869]: I0130 21:56:00.955497 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jqvd\" (UniqueName: \"kubernetes.io/projected/565c0cfa-b127-4da5-a3be-d660b5224997-kube-api-access-4jqvd\") pod \"565c0cfa-b127-4da5-a3be-d660b5224997\" (UID: \"565c0cfa-b127-4da5-a3be-d660b5224997\") " Jan 30 21:56:00 crc kubenswrapper[4869]: I0130 21:56:00.957018 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/565c0cfa-b127-4da5-a3be-d660b5224997-bundle" (OuterVolumeSpecName: "bundle") pod "565c0cfa-b127-4da5-a3be-d660b5224997" (UID: "565c0cfa-b127-4da5-a3be-d660b5224997"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:56:00 crc kubenswrapper[4869]: I0130 21:56:00.964068 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/565c0cfa-b127-4da5-a3be-d660b5224997-kube-api-access-4jqvd" (OuterVolumeSpecName: "kube-api-access-4jqvd") pod "565c0cfa-b127-4da5-a3be-d660b5224997" (UID: "565c0cfa-b127-4da5-a3be-d660b5224997"). InnerVolumeSpecName "kube-api-access-4jqvd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:56:00 crc kubenswrapper[4869]: I0130 21:56:00.973525 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/565c0cfa-b127-4da5-a3be-d660b5224997-util" (OuterVolumeSpecName: "util") pod "565c0cfa-b127-4da5-a3be-d660b5224997" (UID: "565c0cfa-b127-4da5-a3be-d660b5224997"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:56:01 crc kubenswrapper[4869]: I0130 21:56:01.057904 4869 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/565c0cfa-b127-4da5-a3be-d660b5224997-util\") on node \"crc\" DevicePath \"\"" Jan 30 21:56:01 crc kubenswrapper[4869]: I0130 21:56:01.057964 4869 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/565c0cfa-b127-4da5-a3be-d660b5224997-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:56:01 crc kubenswrapper[4869]: I0130 21:56:01.057984 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jqvd\" (UniqueName: \"kubernetes.io/projected/565c0cfa-b127-4da5-a3be-d660b5224997-kube-api-access-4jqvd\") on node \"crc\" DevicePath \"\"" Jan 30 21:56:01 crc kubenswrapper[4869]: I0130 21:56:01.631884 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5q244" event={"ID":"565c0cfa-b127-4da5-a3be-d660b5224997","Type":"ContainerDied","Data":"bebbbe5db2dde85c359feaf06b9300b4633050358de409f252c42f20e070dc98"} Jan 30 21:56:01 crc kubenswrapper[4869]: I0130 21:56:01.631942 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5q244" Jan 30 21:56:01 crc kubenswrapper[4869]: I0130 21:56:01.631954 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bebbbe5db2dde85c359feaf06b9300b4633050358de409f252c42f20e070dc98" Jan 30 21:56:02 crc kubenswrapper[4869]: I0130 21:56:02.052764 4869 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 30 21:56:02 crc kubenswrapper[4869]: I0130 21:56:02.780101 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-s96f5" Jan 30 21:56:10 crc kubenswrapper[4869]: I0130 21:56:10.024951 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-c784b4f9f-nctpd"] Jan 30 21:56:10 crc kubenswrapper[4869]: E0130 21:56:10.025744 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="565c0cfa-b127-4da5-a3be-d660b5224997" containerName="util" Jan 30 21:56:10 crc kubenswrapper[4869]: I0130 21:56:10.025757 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="565c0cfa-b127-4da5-a3be-d660b5224997" containerName="util" Jan 30 21:56:10 crc kubenswrapper[4869]: E0130 21:56:10.025770 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="565c0cfa-b127-4da5-a3be-d660b5224997" containerName="pull" Jan 30 21:56:10 crc kubenswrapper[4869]: I0130 21:56:10.025777 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="565c0cfa-b127-4da5-a3be-d660b5224997" containerName="pull" Jan 30 21:56:10 crc kubenswrapper[4869]: E0130 21:56:10.025800 4869 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="565c0cfa-b127-4da5-a3be-d660b5224997" containerName="extract" Jan 30 21:56:10 crc kubenswrapper[4869]: I0130 21:56:10.025807 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="565c0cfa-b127-4da5-a3be-d660b5224997" containerName="extract" Jan 30 21:56:10 crc kubenswrapper[4869]: I0130 21:56:10.025934 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="565c0cfa-b127-4da5-a3be-d660b5224997" containerName="extract" Jan 30 21:56:10 crc kubenswrapper[4869]: I0130 21:56:10.026377 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-c784b4f9f-nctpd" Jan 30 21:56:10 crc kubenswrapper[4869]: I0130 21:56:10.031046 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 30 21:56:10 crc kubenswrapper[4869]: I0130 21:56:10.031164 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 30 21:56:10 crc kubenswrapper[4869]: I0130 21:56:10.031207 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-q8rzm" Jan 30 21:56:10 crc kubenswrapper[4869]: I0130 21:56:10.031204 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 30 21:56:10 crc kubenswrapper[4869]: I0130 21:56:10.031724 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 30 21:56:10 crc kubenswrapper[4869]: I0130 21:56:10.050818 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-c784b4f9f-nctpd"] Jan 30 21:56:10 crc kubenswrapper[4869]: I0130 21:56:10.069003 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/74d47ca3-77d5-40b2-bf78-e6434e094b98-webhook-cert\") pod \"metallb-operator-controller-manager-c784b4f9f-nctpd\" (UID: \"74d47ca3-77d5-40b2-bf78-e6434e094b98\") " pod="metallb-system/metallb-operator-controller-manager-c784b4f9f-nctpd" Jan 30 21:56:10 crc kubenswrapper[4869]: I0130 21:56:10.069077 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/74d47ca3-77d5-40b2-bf78-e6434e094b98-apiservice-cert\") pod \"metallb-operator-controller-manager-c784b4f9f-nctpd\" (UID: \"74d47ca3-77d5-40b2-bf78-e6434e094b98\") " pod="metallb-system/metallb-operator-controller-manager-c784b4f9f-nctpd" Jan 30 21:56:10 crc kubenswrapper[4869]: I0130 21:56:10.069105 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4stl7\" (UniqueName: \"kubernetes.io/projected/74d47ca3-77d5-40b2-bf78-e6434e094b98-kube-api-access-4stl7\") pod \"metallb-operator-controller-manager-c784b4f9f-nctpd\" (UID: \"74d47ca3-77d5-40b2-bf78-e6434e094b98\") " pod="metallb-system/metallb-operator-controller-manager-c784b4f9f-nctpd" Jan 30 21:56:10 crc kubenswrapper[4869]: I0130 21:56:10.170225 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/74d47ca3-77d5-40b2-bf78-e6434e094b98-webhook-cert\") pod \"metallb-operator-controller-manager-c784b4f9f-nctpd\" (UID: \"74d47ca3-77d5-40b2-bf78-e6434e094b98\") " 
pod="metallb-system/metallb-operator-controller-manager-c784b4f9f-nctpd" Jan 30 21:56:10 crc kubenswrapper[4869]: I0130 21:56:10.170292 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/74d47ca3-77d5-40b2-bf78-e6434e094b98-apiservice-cert\") pod \"metallb-operator-controller-manager-c784b4f9f-nctpd\" (UID: \"74d47ca3-77d5-40b2-bf78-e6434e094b98\") " pod="metallb-system/metallb-operator-controller-manager-c784b4f9f-nctpd" Jan 30 21:56:10 crc kubenswrapper[4869]: I0130 21:56:10.170310 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4stl7\" (UniqueName: \"kubernetes.io/projected/74d47ca3-77d5-40b2-bf78-e6434e094b98-kube-api-access-4stl7\") pod \"metallb-operator-controller-manager-c784b4f9f-nctpd\" (UID: \"74d47ca3-77d5-40b2-bf78-e6434e094b98\") " pod="metallb-system/metallb-operator-controller-manager-c784b4f9f-nctpd" Jan 30 21:56:10 crc kubenswrapper[4869]: I0130 21:56:10.180853 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/74d47ca3-77d5-40b2-bf78-e6434e094b98-webhook-cert\") pod \"metallb-operator-controller-manager-c784b4f9f-nctpd\" (UID: \"74d47ca3-77d5-40b2-bf78-e6434e094b98\") " pod="metallb-system/metallb-operator-controller-manager-c784b4f9f-nctpd" Jan 30 21:56:10 crc kubenswrapper[4869]: I0130 21:56:10.180881 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/74d47ca3-77d5-40b2-bf78-e6434e094b98-apiservice-cert\") pod \"metallb-operator-controller-manager-c784b4f9f-nctpd\" (UID: \"74d47ca3-77d5-40b2-bf78-e6434e094b98\") " pod="metallb-system/metallb-operator-controller-manager-c784b4f9f-nctpd" Jan 30 21:56:10 crc kubenswrapper[4869]: I0130 21:56:10.189424 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4stl7\" (UniqueName: \"kubernetes.io/projected/74d47ca3-77d5-40b2-bf78-e6434e094b98-kube-api-access-4stl7\") pod \"metallb-operator-controller-manager-c784b4f9f-nctpd\" (UID: \"74d47ca3-77d5-40b2-bf78-e6434e094b98\") " pod="metallb-system/metallb-operator-controller-manager-c784b4f9f-nctpd" Jan 30 21:56:10 crc kubenswrapper[4869]: I0130 21:56:10.264413 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7866d54458-pq5sd"] Jan 30 21:56:10 crc kubenswrapper[4869]: I0130 21:56:10.265100 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7866d54458-pq5sd" Jan 30 21:56:10 crc kubenswrapper[4869]: I0130 21:56:10.267080 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-kd5vj" Jan 30 21:56:10 crc kubenswrapper[4869]: I0130 21:56:10.267249 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 30 21:56:10 crc kubenswrapper[4869]: I0130 21:56:10.267564 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 30 21:56:10 crc kubenswrapper[4869]: I0130 21:56:10.283755 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7866d54458-pq5sd"] Jan 30 21:56:10 crc kubenswrapper[4869]: I0130 21:56:10.342193 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-c784b4f9f-nctpd" Jan 30 21:56:10 crc kubenswrapper[4869]: I0130 21:56:10.372458 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55qcq\" (UniqueName: \"kubernetes.io/projected/b8abd45b-148e-4550-9f42-7ebb36bc52a3-kube-api-access-55qcq\") pod \"metallb-operator-webhook-server-7866d54458-pq5sd\" (UID: \"b8abd45b-148e-4550-9f42-7ebb36bc52a3\") " pod="metallb-system/metallb-operator-webhook-server-7866d54458-pq5sd" Jan 30 21:56:10 crc kubenswrapper[4869]: I0130 21:56:10.372536 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b8abd45b-148e-4550-9f42-7ebb36bc52a3-apiservice-cert\") pod \"metallb-operator-webhook-server-7866d54458-pq5sd\" (UID: \"b8abd45b-148e-4550-9f42-7ebb36bc52a3\") " pod="metallb-system/metallb-operator-webhook-server-7866d54458-pq5sd" Jan 30 21:56:10 crc kubenswrapper[4869]: I0130 21:56:10.372554 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b8abd45b-148e-4550-9f42-7ebb36bc52a3-webhook-cert\") pod \"metallb-operator-webhook-server-7866d54458-pq5sd\" (UID: \"b8abd45b-148e-4550-9f42-7ebb36bc52a3\") " pod="metallb-system/metallb-operator-webhook-server-7866d54458-pq5sd" Jan 30 21:56:10 crc kubenswrapper[4869]: I0130 21:56:10.475816 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b8abd45b-148e-4550-9f42-7ebb36bc52a3-apiservice-cert\") pod \"metallb-operator-webhook-server-7866d54458-pq5sd\" (UID: \"b8abd45b-148e-4550-9f42-7ebb36bc52a3\") " pod="metallb-system/metallb-operator-webhook-server-7866d54458-pq5sd" Jan 30 21:56:10 crc kubenswrapper[4869]: I0130 21:56:10.476106 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b8abd45b-148e-4550-9f42-7ebb36bc52a3-webhook-cert\") pod \"metallb-operator-webhook-server-7866d54458-pq5sd\" (UID: \"b8abd45b-148e-4550-9f42-7ebb36bc52a3\") " pod="metallb-system/metallb-operator-webhook-server-7866d54458-pq5sd" Jan 30 21:56:10 crc kubenswrapper[4869]: I0130 21:56:10.476161 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55qcq\" (UniqueName: \"kubernetes.io/projected/b8abd45b-148e-4550-9f42-7ebb36bc52a3-kube-api-access-55qcq\") pod \"metallb-operator-webhook-server-7866d54458-pq5sd\" (UID: \"b8abd45b-148e-4550-9f42-7ebb36bc52a3\") " pod="metallb-system/metallb-operator-webhook-server-7866d54458-pq5sd" Jan 30 21:56:10 crc kubenswrapper[4869]: I0130 21:56:10.481115 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b8abd45b-148e-4550-9f42-7ebb36bc52a3-apiservice-cert\") pod \"metallb-operator-webhook-server-7866d54458-pq5sd\" (UID: \"b8abd45b-148e-4550-9f42-7ebb36bc52a3\") " pod="metallb-system/metallb-operator-webhook-server-7866d54458-pq5sd" Jan 30 21:56:10 crc kubenswrapper[4869]: I0130 21:56:10.493515 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b8abd45b-148e-4550-9f42-7ebb36bc52a3-webhook-cert\") pod \"metallb-operator-webhook-server-7866d54458-pq5sd\" (UID: \"b8abd45b-148e-4550-9f42-7ebb36bc52a3\") " 
pod="metallb-system/metallb-operator-webhook-server-7866d54458-pq5sd" Jan 30 21:56:10 crc kubenswrapper[4869]: I0130 21:56:10.504113 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55qcq\" (UniqueName: \"kubernetes.io/projected/b8abd45b-148e-4550-9f42-7ebb36bc52a3-kube-api-access-55qcq\") pod \"metallb-operator-webhook-server-7866d54458-pq5sd\" (UID: \"b8abd45b-148e-4550-9f42-7ebb36bc52a3\") " pod="metallb-system/metallb-operator-webhook-server-7866d54458-pq5sd" Jan 30 21:56:10 crc kubenswrapper[4869]: I0130 21:56:10.583405 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7866d54458-pq5sd" Jan 30 21:56:10 crc kubenswrapper[4869]: I0130 21:56:10.817444 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7866d54458-pq5sd"] Jan 30 21:56:10 crc kubenswrapper[4869]: W0130 21:56:10.828205 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8abd45b_148e_4550_9f42_7ebb36bc52a3.slice/crio-6747c23e3e8cf30221c4006b933fa11e76ea45e9ac4986200f7aed353e36e26e WatchSource:0}: Error finding container 6747c23e3e8cf30221c4006b933fa11e76ea45e9ac4986200f7aed353e36e26e: Status 404 returned error can't find the container with id 6747c23e3e8cf30221c4006b933fa11e76ea45e9ac4986200f7aed353e36e26e Jan 30 21:56:10 crc kubenswrapper[4869]: I0130 21:56:10.859459 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-c784b4f9f-nctpd"] Jan 30 21:56:10 crc kubenswrapper[4869]: W0130 21:56:10.863342 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74d47ca3_77d5_40b2_bf78_e6434e094b98.slice/crio-ecc905655995df3add2a63b5baf090ec3cc1edfd607e1fec7bb5fd774e60f194 WatchSource:0}: Error finding container ecc905655995df3add2a63b5baf090ec3cc1edfd607e1fec7bb5fd774e60f194: Status 404 returned error can't find the container with id ecc905655995df3add2a63b5baf090ec3cc1edfd607e1fec7bb5fd774e60f194 Jan 30 21:56:11 crc kubenswrapper[4869]: I0130 21:56:11.691489 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-c784b4f9f-nctpd" event={"ID":"74d47ca3-77d5-40b2-bf78-e6434e094b98","Type":"ContainerStarted","Data":"ecc905655995df3add2a63b5baf090ec3cc1edfd607e1fec7bb5fd774e60f194"} Jan 30 21:56:11 crc kubenswrapper[4869]: I0130 21:56:11.692817 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7866d54458-pq5sd" event={"ID":"b8abd45b-148e-4550-9f42-7ebb36bc52a3","Type":"ContainerStarted","Data":"6747c23e3e8cf30221c4006b933fa11e76ea45e9ac4986200f7aed353e36e26e"} Jan 30 21:56:18 crc kubenswrapper[4869]: I0130 21:56:18.751059 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-c784b4f9f-nctpd" event={"ID":"74d47ca3-77d5-40b2-bf78-e6434e094b98","Type":"ContainerStarted","Data":"3a24eacb8499e6a5a7489107dd4bbbf88de42471dae395540e053399aa6b9461"} Jan 30 21:56:18 crc kubenswrapper[4869]: I0130 21:56:18.752056 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-c784b4f9f-nctpd" Jan 30 21:56:18 crc kubenswrapper[4869]: I0130 21:56:18.752794 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/metallb-operator-webhook-server-7866d54458-pq5sd" event={"ID":"b8abd45b-148e-4550-9f42-7ebb36bc52a3","Type":"ContainerStarted","Data":"9d929601c5f14dfa24f512bfbacc5c62e1c681a350adfd2857af53e0d8d15703"} Jan 30 21:56:18 crc kubenswrapper[4869]: I0130 21:56:18.771849 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-c784b4f9f-nctpd" podStartSLOduration=1.782003408 podStartE2EDuration="8.771833272s" podCreationTimestamp="2026-01-30 21:56:10 +0000 UTC" firstStartedPulling="2026-01-30 21:56:10.865959861 +0000 UTC m=+771.751717886" lastFinishedPulling="2026-01-30 21:56:17.855789725 +0000 UTC m=+778.741547750" observedRunningTime="2026-01-30 21:56:18.769467828 +0000 UTC m=+779.655225863" watchObservedRunningTime="2026-01-30 21:56:18.771833272 +0000 UTC m=+779.657591297" Jan 30 21:56:18 crc kubenswrapper[4869]: I0130 21:56:18.790287 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7866d54458-pq5sd" podStartSLOduration=1.7560816780000001 podStartE2EDuration="8.790260069s" podCreationTimestamp="2026-01-30 21:56:10 +0000 UTC" firstStartedPulling="2026-01-30 21:56:10.830541443 +0000 UTC m=+771.716299468" lastFinishedPulling="2026-01-30 21:56:17.864719834 +0000 UTC m=+778.750477859" observedRunningTime="2026-01-30 21:56:18.787461541 +0000 UTC m=+779.673219576" watchObservedRunningTime="2026-01-30 21:56:18.790260069 +0000 UTC m=+779.676018114" Jan 30 21:56:19 crc kubenswrapper[4869]: I0130 21:56:19.757533 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7866d54458-pq5sd" Jan 30 21:56:30 crc kubenswrapper[4869]: I0130 21:56:30.595373 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7866d54458-pq5sd" Jan 30 21:56:31 crc kubenswrapper[4869]: I0130 21:56:31.990427 4869 patch_prober.go:28] interesting pod/machine-config-daemon-vzgdv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:56:31 crc kubenswrapper[4869]: I0130 21:56:31.991054 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:56:50 crc kubenswrapper[4869]: I0130 21:56:50.344757 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-c784b4f9f-nctpd" Jan 30 21:56:50 crc kubenswrapper[4869]: I0130 21:56:50.963537 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-pdqbz"] Jan 30 21:56:50 crc kubenswrapper[4869]: I0130 21:56:50.964336 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-pdqbz" Jan 30 21:56:50 crc kubenswrapper[4869]: I0130 21:56:50.968364 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-jj598" Jan 30 21:56:50 crc kubenswrapper[4869]: I0130 21:56:50.968756 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 30 21:56:50 crc kubenswrapper[4869]: I0130 21:56:50.969104 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-qrwjb"] Jan 30 21:56:50 crc kubenswrapper[4869]: I0130 21:56:50.971010 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-qrwjb" Jan 30 21:56:50 crc kubenswrapper[4869]: I0130 21:56:50.973997 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 30 21:56:50 crc kubenswrapper[4869]: I0130 21:56:50.978145 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-pdqbz"] Jan 30 21:56:50 crc kubenswrapper[4869]: I0130 21:56:50.980136 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.047782 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-2fslp"] Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.048744 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-2fslp" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.050752 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.050820 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-f6bd6" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.053382 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.054263 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.073345 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-kjnqq"] Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.074312 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-kjnqq" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.075360 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnh5h\" (UniqueName: \"kubernetes.io/projected/bb7f0287-c0d2-4a75-b392-5d143f6a9eb6-kube-api-access-bnh5h\") pod \"speaker-2fslp\" (UID: \"bb7f0287-c0d2-4a75-b392-5d143f6a9eb6\") " pod="metallb-system/speaker-2fslp" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.075400 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/0d120a71-eb76-4e21-bd06-a646961dbebc-frr-startup\") pod \"frr-k8s-qrwjb\" (UID: \"0d120a71-eb76-4e21-bd06-a646961dbebc\") " pod="metallb-system/frr-k8s-qrwjb" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.075425 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bb7f0287-c0d2-4a75-b392-5d143f6a9eb6-metrics-certs\") pod \"speaker-2fslp\" (UID: \"bb7f0287-c0d2-4a75-b392-5d143f6a9eb6\") " pod="metallb-system/speaker-2fslp" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.075449 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/bb7f0287-c0d2-4a75-b392-5d143f6a9eb6-metallb-excludel2\") pod \"speaker-2fslp\" (UID: \"bb7f0287-c0d2-4a75-b392-5d143f6a9eb6\") " pod="metallb-system/speaker-2fslp" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.075464 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/0d120a71-eb76-4e21-bd06-a646961dbebc-frr-conf\") pod \"frr-k8s-qrwjb\" (UID: \"0d120a71-eb76-4e21-bd06-a646961dbebc\") " pod="metallb-system/frr-k8s-qrwjb" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.075485 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/0d120a71-eb76-4e21-bd06-a646961dbebc-reloader\") pod \"frr-k8s-qrwjb\" (UID: \"0d120a71-eb76-4e21-bd06-a646961dbebc\") " pod="metallb-system/frr-k8s-qrwjb" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.075555 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/bb7f0287-c0d2-4a75-b392-5d143f6a9eb6-memberlist\") pod \"speaker-2fslp\" (UID: \"bb7f0287-c0d2-4a75-b392-5d143f6a9eb6\") " pod="metallb-system/speaker-2fslp" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.075576 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d120a71-eb76-4e21-bd06-a646961dbebc-metrics-certs\") pod \"frr-k8s-qrwjb\" (UID: \"0d120a71-eb76-4e21-bd06-a646961dbebc\") " pod="metallb-system/frr-k8s-qrwjb" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.075595 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/0d120a71-eb76-4e21-bd06-a646961dbebc-metrics\") pod \"frr-k8s-qrwjb\" (UID: \"0d120a71-eb76-4e21-bd06-a646961dbebc\") " pod="metallb-system/frr-k8s-qrwjb" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.075612 4869 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79ljh\" (UniqueName: \"kubernetes.io/projected/0d120a71-eb76-4e21-bd06-a646961dbebc-kube-api-access-79ljh\") pod \"frr-k8s-qrwjb\" (UID: \"0d120a71-eb76-4e21-bd06-a646961dbebc\") " pod="metallb-system/frr-k8s-qrwjb" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.075630 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8df09342-d1b4-46c8-b073-756f9c26e15b-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-pdqbz\" (UID: \"8df09342-d1b4-46c8-b073-756f9c26e15b\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-pdqbz" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.075686 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/0d120a71-eb76-4e21-bd06-a646961dbebc-frr-sockets\") pod \"frr-k8s-qrwjb\" (UID: \"0d120a71-eb76-4e21-bd06-a646961dbebc\") " pod="metallb-system/frr-k8s-qrwjb" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.075775 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9qwc\" (UniqueName: \"kubernetes.io/projected/8df09342-d1b4-46c8-b073-756f9c26e15b-kube-api-access-x9qwc\") pod \"frr-k8s-webhook-server-7df86c4f6c-pdqbz\" (UID: \"8df09342-d1b4-46c8-b073-756f9c26e15b\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-pdqbz" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.076313 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.092072 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-kjnqq"] Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.176938 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/bb7f0287-c0d2-4a75-b392-5d143f6a9eb6-memberlist\") pod \"speaker-2fslp\" (UID: \"bb7f0287-c0d2-4a75-b392-5d143f6a9eb6\") " pod="metallb-system/speaker-2fslp" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.176993 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d120a71-eb76-4e21-bd06-a646961dbebc-metrics-certs\") pod \"frr-k8s-qrwjb\" (UID: \"0d120a71-eb76-4e21-bd06-a646961dbebc\") " pod="metallb-system/frr-k8s-qrwjb" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.177015 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/0d120a71-eb76-4e21-bd06-a646961dbebc-metrics\") pod \"frr-k8s-qrwjb\" (UID: \"0d120a71-eb76-4e21-bd06-a646961dbebc\") " pod="metallb-system/frr-k8s-qrwjb" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.177036 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79ljh\" (UniqueName: \"kubernetes.io/projected/0d120a71-eb76-4e21-bd06-a646961dbebc-kube-api-access-79ljh\") pod \"frr-k8s-qrwjb\" (UID: \"0d120a71-eb76-4e21-bd06-a646961dbebc\") " pod="metallb-system/frr-k8s-qrwjb" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.177075 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/8df09342-d1b4-46c8-b073-756f9c26e15b-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-pdqbz\" (UID: \"8df09342-d1b4-46c8-b073-756f9c26e15b\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-pdqbz" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.177093 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/48ddd270-2e1a-4924-905c-89327f9fd1f4-metrics-certs\") pod \"controller-6968d8fdc4-kjnqq\" (UID: \"48ddd270-2e1a-4924-905c-89327f9fd1f4\") " pod="metallb-system/controller-6968d8fdc4-kjnqq" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.177114 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/48ddd270-2e1a-4924-905c-89327f9fd1f4-cert\") pod \"controller-6968d8fdc4-kjnqq\" (UID: \"48ddd270-2e1a-4924-905c-89327f9fd1f4\") " pod="metallb-system/controller-6968d8fdc4-kjnqq" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.177137 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t54ht\" (UniqueName: \"kubernetes.io/projected/48ddd270-2e1a-4924-905c-89327f9fd1f4-kube-api-access-t54ht\") pod \"controller-6968d8fdc4-kjnqq\" (UID: \"48ddd270-2e1a-4924-905c-89327f9fd1f4\") " pod="metallb-system/controller-6968d8fdc4-kjnqq" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.177157 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/0d120a71-eb76-4e21-bd06-a646961dbebc-frr-sockets\") pod \"frr-k8s-qrwjb\" (UID: \"0d120a71-eb76-4e21-bd06-a646961dbebc\") " pod="metallb-system/frr-k8s-qrwjb" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.177175 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9qwc\" (UniqueName: \"kubernetes.io/projected/8df09342-d1b4-46c8-b073-756f9c26e15b-kube-api-access-x9qwc\") pod \"frr-k8s-webhook-server-7df86c4f6c-pdqbz\" (UID: \"8df09342-d1b4-46c8-b073-756f9c26e15b\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-pdqbz" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.177190 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnh5h\" (UniqueName: \"kubernetes.io/projected/bb7f0287-c0d2-4a75-b392-5d143f6a9eb6-kube-api-access-bnh5h\") pod \"speaker-2fslp\" (UID: \"bb7f0287-c0d2-4a75-b392-5d143f6a9eb6\") " pod="metallb-system/speaker-2fslp" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.177221 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/0d120a71-eb76-4e21-bd06-a646961dbebc-frr-startup\") pod \"frr-k8s-qrwjb\" (UID: \"0d120a71-eb76-4e21-bd06-a646961dbebc\") " pod="metallb-system/frr-k8s-qrwjb" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.177248 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bb7f0287-c0d2-4a75-b392-5d143f6a9eb6-metrics-certs\") pod \"speaker-2fslp\" (UID: \"bb7f0287-c0d2-4a75-b392-5d143f6a9eb6\") " pod="metallb-system/speaker-2fslp" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.177271 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: 
\"kubernetes.io/configmap/bb7f0287-c0d2-4a75-b392-5d143f6a9eb6-metallb-excludel2\") pod \"speaker-2fslp\" (UID: \"bb7f0287-c0d2-4a75-b392-5d143f6a9eb6\") " pod="metallb-system/speaker-2fslp" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.177285 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/0d120a71-eb76-4e21-bd06-a646961dbebc-frr-conf\") pod \"frr-k8s-qrwjb\" (UID: \"0d120a71-eb76-4e21-bd06-a646961dbebc\") " pod="metallb-system/frr-k8s-qrwjb" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.177303 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/0d120a71-eb76-4e21-bd06-a646961dbebc-reloader\") pod \"frr-k8s-qrwjb\" (UID: \"0d120a71-eb76-4e21-bd06-a646961dbebc\") " pod="metallb-system/frr-k8s-qrwjb" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.177996 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/0d120a71-eb76-4e21-bd06-a646961dbebc-reloader\") pod \"frr-k8s-qrwjb\" (UID: \"0d120a71-eb76-4e21-bd06-a646961dbebc\") " pod="metallb-system/frr-k8s-qrwjb" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.178716 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/0d120a71-eb76-4e21-bd06-a646961dbebc-frr-conf\") pod \"frr-k8s-qrwjb\" (UID: \"0d120a71-eb76-4e21-bd06-a646961dbebc\") " pod="metallb-system/frr-k8s-qrwjb" Jan 30 21:56:51 crc kubenswrapper[4869]: E0130 21:56:51.178013 4869 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.178926 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/0d120a71-eb76-4e21-bd06-a646961dbebc-metrics\") pod \"frr-k8s-qrwjb\" (UID: \"0d120a71-eb76-4e21-bd06-a646961dbebc\") " pod="metallb-system/frr-k8s-qrwjb" Jan 30 21:56:51 crc kubenswrapper[4869]: E0130 21:56:51.178852 4869 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 30 21:56:51 crc kubenswrapper[4869]: E0130 21:56:51.179082 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb7f0287-c0d2-4a75-b392-5d143f6a9eb6-memberlist podName:bb7f0287-c0d2-4a75-b392-5d143f6a9eb6 nodeName:}" failed. No retries permitted until 2026-01-30 21:56:51.678944622 +0000 UTC m=+812.564702647 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/bb7f0287-c0d2-4a75-b392-5d143f6a9eb6-memberlist") pod "speaker-2fslp" (UID: "bb7f0287-c0d2-4a75-b392-5d143f6a9eb6") : secret "metallb-memberlist" not found Jan 30 21:56:51 crc kubenswrapper[4869]: E0130 21:56:51.179190 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb7f0287-c0d2-4a75-b392-5d143f6a9eb6-metrics-certs podName:bb7f0287-c0d2-4a75-b392-5d143f6a9eb6 nodeName:}" failed. No retries permitted until 2026-01-30 21:56:51.679177248 +0000 UTC m=+812.564935343 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bb7f0287-c0d2-4a75-b392-5d143f6a9eb6-metrics-certs") pod "speaker-2fslp" (UID: "bb7f0287-c0d2-4a75-b392-5d143f6a9eb6") : secret "speaker-certs-secret" not found Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.179133 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/bb7f0287-c0d2-4a75-b392-5d143f6a9eb6-metallb-excludel2\") pod \"speaker-2fslp\" (UID: \"bb7f0287-c0d2-4a75-b392-5d143f6a9eb6\") " pod="metallb-system/speaker-2fslp" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.179087 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/0d120a71-eb76-4e21-bd06-a646961dbebc-frr-sockets\") pod \"frr-k8s-qrwjb\" (UID: \"0d120a71-eb76-4e21-bd06-a646961dbebc\") " pod="metallb-system/frr-k8s-qrwjb" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.179620 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/0d120a71-eb76-4e21-bd06-a646961dbebc-frr-startup\") pod \"frr-k8s-qrwjb\" (UID: \"0d120a71-eb76-4e21-bd06-a646961dbebc\") " pod="metallb-system/frr-k8s-qrwjb" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.185581 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8df09342-d1b4-46c8-b073-756f9c26e15b-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-pdqbz\" (UID: \"8df09342-d1b4-46c8-b073-756f9c26e15b\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-pdqbz" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.185627 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d120a71-eb76-4e21-bd06-a646961dbebc-metrics-certs\") pod \"frr-k8s-qrwjb\" (UID: \"0d120a71-eb76-4e21-bd06-a646961dbebc\") " pod="metallb-system/frr-k8s-qrwjb" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.195975 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79ljh\" (UniqueName: \"kubernetes.io/projected/0d120a71-eb76-4e21-bd06-a646961dbebc-kube-api-access-79ljh\") pod \"frr-k8s-qrwjb\" (UID: \"0d120a71-eb76-4e21-bd06-a646961dbebc\") " pod="metallb-system/frr-k8s-qrwjb" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.200732 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9qwc\" (UniqueName: \"kubernetes.io/projected/8df09342-d1b4-46c8-b073-756f9c26e15b-kube-api-access-x9qwc\") pod \"frr-k8s-webhook-server-7df86c4f6c-pdqbz\" (UID: \"8df09342-d1b4-46c8-b073-756f9c26e15b\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-pdqbz" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.201310 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnh5h\" (UniqueName: \"kubernetes.io/projected/bb7f0287-c0d2-4a75-b392-5d143f6a9eb6-kube-api-access-bnh5h\") pod \"speaker-2fslp\" (UID: \"bb7f0287-c0d2-4a75-b392-5d143f6a9eb6\") " pod="metallb-system/speaker-2fslp" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.278871 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/48ddd270-2e1a-4924-905c-89327f9fd1f4-metrics-certs\") pod \"controller-6968d8fdc4-kjnqq\" (UID: \"48ddd270-2e1a-4924-905c-89327f9fd1f4\") " 
pod="metallb-system/controller-6968d8fdc4-kjnqq" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.278948 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/48ddd270-2e1a-4924-905c-89327f9fd1f4-cert\") pod \"controller-6968d8fdc4-kjnqq\" (UID: \"48ddd270-2e1a-4924-905c-89327f9fd1f4\") " pod="metallb-system/controller-6968d8fdc4-kjnqq" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.278990 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t54ht\" (UniqueName: \"kubernetes.io/projected/48ddd270-2e1a-4924-905c-89327f9fd1f4-kube-api-access-t54ht\") pod \"controller-6968d8fdc4-kjnqq\" (UID: \"48ddd270-2e1a-4924-905c-89327f9fd1f4\") " pod="metallb-system/controller-6968d8fdc4-kjnqq" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.281314 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.283317 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-pdqbz" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.283929 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/48ddd270-2e1a-4924-905c-89327f9fd1f4-metrics-certs\") pod \"controller-6968d8fdc4-kjnqq\" (UID: \"48ddd270-2e1a-4924-905c-89327f9fd1f4\") " pod="metallb-system/controller-6968d8fdc4-kjnqq" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.291688 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-qrwjb" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.296576 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/48ddd270-2e1a-4924-905c-89327f9fd1f4-cert\") pod \"controller-6968d8fdc4-kjnqq\" (UID: \"48ddd270-2e1a-4924-905c-89327f9fd1f4\") " pod="metallb-system/controller-6968d8fdc4-kjnqq" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.301964 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t54ht\" (UniqueName: \"kubernetes.io/projected/48ddd270-2e1a-4924-905c-89327f9fd1f4-kube-api-access-t54ht\") pod \"controller-6968d8fdc4-kjnqq\" (UID: \"48ddd270-2e1a-4924-905c-89327f9fd1f4\") " pod="metallb-system/controller-6968d8fdc4-kjnqq" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.390892 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-kjnqq" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.474573 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-pdqbz"] Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.586477 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-kjnqq"] Jan 30 21:56:51 crc kubenswrapper[4869]: W0130 21:56:51.590133 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48ddd270_2e1a_4924_905c_89327f9fd1f4.slice/crio-8380577d95911675c19a345ef5a51bec9b9064cd57aa6844059c4c1dbba9501f WatchSource:0}: Error finding container 8380577d95911675c19a345ef5a51bec9b9064cd57aa6844059c4c1dbba9501f: Status 404 returned error can't find the container with id 8380577d95911675c19a345ef5a51bec9b9064cd57aa6844059c4c1dbba9501f Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.686335 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/bb7f0287-c0d2-4a75-b392-5d143f6a9eb6-memberlist\") pod \"speaker-2fslp\" (UID: \"bb7f0287-c0d2-4a75-b392-5d143f6a9eb6\") " pod="metallb-system/speaker-2fslp" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.686470 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bb7f0287-c0d2-4a75-b392-5d143f6a9eb6-metrics-certs\") pod \"speaker-2fslp\" (UID: \"bb7f0287-c0d2-4a75-b392-5d143f6a9eb6\") " pod="metallb-system/speaker-2fslp" Jan 30 21:56:51 crc kubenswrapper[4869]: E0130 21:56:51.687019 4869 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 30 21:56:51 crc kubenswrapper[4869]: E0130 21:56:51.687078 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb7f0287-c0d2-4a75-b392-5d143f6a9eb6-memberlist podName:bb7f0287-c0d2-4a75-b392-5d143f6a9eb6 nodeName:}" failed. No retries permitted until 2026-01-30 21:56:52.687062767 +0000 UTC m=+813.572820782 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/bb7f0287-c0d2-4a75-b392-5d143f6a9eb6-memberlist") pod "speaker-2fslp" (UID: "bb7f0287-c0d2-4a75-b392-5d143f6a9eb6") : secret "metallb-memberlist" not found Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.692688 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bb7f0287-c0d2-4a75-b392-5d143f6a9eb6-metrics-certs\") pod \"speaker-2fslp\" (UID: \"bb7f0287-c0d2-4a75-b392-5d143f6a9eb6\") " pod="metallb-system/speaker-2fslp" Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.921063 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qrwjb" event={"ID":"0d120a71-eb76-4e21-bd06-a646961dbebc","Type":"ContainerStarted","Data":"0d2a47fa3482fa364cb83deee8981e3b111972e0603c620a60b625b62d0a0776"} Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.923648 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-pdqbz" event={"ID":"8df09342-d1b4-46c8-b073-756f9c26e15b","Type":"ContainerStarted","Data":"f089f047de59b74bf0ec52201034cfe543b1ede7356622c2a0f257d463c8094b"} Jan 30 21:56:51 crc kubenswrapper[4869]: I0130 21:56:51.925308 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-kjnqq" event={"ID":"48ddd270-2e1a-4924-905c-89327f9fd1f4","Type":"ContainerStarted","Data":"8380577d95911675c19a345ef5a51bec9b9064cd57aa6844059c4c1dbba9501f"} Jan 30 21:56:52 crc kubenswrapper[4869]: I0130 21:56:52.703214 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/bb7f0287-c0d2-4a75-b392-5d143f6a9eb6-memberlist\") pod \"speaker-2fslp\" (UID: \"bb7f0287-c0d2-4a75-b392-5d143f6a9eb6\") " pod="metallb-system/speaker-2fslp" Jan 30 21:56:52 crc kubenswrapper[4869]: I0130 21:56:52.707163 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/bb7f0287-c0d2-4a75-b392-5d143f6a9eb6-memberlist\") pod \"speaker-2fslp\" (UID: \"bb7f0287-c0d2-4a75-b392-5d143f6a9eb6\") " pod="metallb-system/speaker-2fslp" Jan 30 21:56:52 crc kubenswrapper[4869]: I0130 21:56:52.863229 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-2fslp" Jan 30 21:56:52 crc kubenswrapper[4869]: I0130 21:56:52.933557 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2fslp" event={"ID":"bb7f0287-c0d2-4a75-b392-5d143f6a9eb6","Type":"ContainerStarted","Data":"ddded5ebad233e90e458cd3d54ac4e940dab21e0ad003908add6388237028f15"} Jan 30 21:56:52 crc kubenswrapper[4869]: I0130 21:56:52.938254 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-kjnqq" event={"ID":"48ddd270-2e1a-4924-905c-89327f9fd1f4","Type":"ContainerStarted","Data":"af9622e5ad77a4b61d3bca00632821b35d6a2f4e4470b1189553793c2fbbd24c"} Jan 30 21:56:53 crc kubenswrapper[4869]: I0130 21:56:53.947209 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2fslp" event={"ID":"bb7f0287-c0d2-4a75-b392-5d143f6a9eb6","Type":"ContainerStarted","Data":"8e6fa0f40da72cdd7d4fe364b979b17d680a06fb6e80f7f480f04f0a8332588f"} Jan 30 21:57:01 crc kubenswrapper[4869]: I0130 21:57:01.990395 4869 patch_prober.go:28] interesting pod/machine-config-daemon-vzgdv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:57:01 crc kubenswrapper[4869]: I0130 21:57:01.991041 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:57:05 crc kubenswrapper[4869]: I0130 21:57:05.039745 4869 generic.go:334] "Generic (PLEG): container finished" podID="0d120a71-eb76-4e21-bd06-a646961dbebc" containerID="86cae8b4588adade3bc53839b3f9860cc3c938067023787721dfc52320f94562" exitCode=0 Jan 30 21:57:05 crc kubenswrapper[4869]: I0130 21:57:05.039859 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qrwjb" event={"ID":"0d120a71-eb76-4e21-bd06-a646961dbebc","Type":"ContainerDied","Data":"86cae8b4588adade3bc53839b3f9860cc3c938067023787721dfc52320f94562"} Jan 30 21:57:05 crc kubenswrapper[4869]: I0130 21:57:05.044525 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-pdqbz" event={"ID":"8df09342-d1b4-46c8-b073-756f9c26e15b","Type":"ContainerStarted","Data":"789789a4d59767f79a131f0a9c836fb77d6afd612a4f4bbbce54a26fac50a669"} Jan 30 21:57:05 crc kubenswrapper[4869]: I0130 21:57:05.045116 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-pdqbz" Jan 30 21:57:05 crc kubenswrapper[4869]: I0130 21:57:05.047143 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-kjnqq" event={"ID":"48ddd270-2e1a-4924-905c-89327f9fd1f4","Type":"ContainerStarted","Data":"07c10766529edc4110e6a76c3a7dc40b9deb6f1e8d7130f2df52280866a6637d"} Jan 30 21:57:05 crc kubenswrapper[4869]: I0130 21:57:05.047648 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-kjnqq" Jan 30 21:57:05 crc kubenswrapper[4869]: I0130 21:57:05.050510 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-kjnqq" Jan 30 21:57:05 crc kubenswrapper[4869]: I0130 
21:57:05.053471 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2fslp" event={"ID":"bb7f0287-c0d2-4a75-b392-5d143f6a9eb6","Type":"ContainerStarted","Data":"e4cd84d8d869485a45bc5a6f2a6f93f1f91d15377aa2d2735fcdba314acbe337"} Jan 30 21:57:05 crc kubenswrapper[4869]: I0130 21:57:05.053706 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-2fslp" Jan 30 21:57:05 crc kubenswrapper[4869]: I0130 21:57:05.058965 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-2fslp" Jan 30 21:57:05 crc kubenswrapper[4869]: I0130 21:57:05.098494 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-pdqbz" podStartSLOduration=2.024503753 podStartE2EDuration="15.098476349s" podCreationTimestamp="2026-01-30 21:56:50 +0000 UTC" firstStartedPulling="2026-01-30 21:56:51.488861647 +0000 UTC m=+812.374619672" lastFinishedPulling="2026-01-30 21:57:04.562834243 +0000 UTC m=+825.448592268" observedRunningTime="2026-01-30 21:57:05.092981067 +0000 UTC m=+825.978739102" watchObservedRunningTime="2026-01-30 21:57:05.098476349 +0000 UTC m=+825.984234364" Jan 30 21:57:05 crc kubenswrapper[4869]: I0130 21:57:05.113397 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-kjnqq" podStartSLOduration=1.640230806 podStartE2EDuration="14.113379986s" podCreationTimestamp="2026-01-30 21:56:51 +0000 UTC" firstStartedPulling="2026-01-30 21:56:52.035038393 +0000 UTC m=+812.920796418" lastFinishedPulling="2026-01-30 21:57:04.508187563 +0000 UTC m=+825.393945598" observedRunningTime="2026-01-30 21:57:05.112361055 +0000 UTC m=+825.998119070" watchObservedRunningTime="2026-01-30 21:57:05.113379986 +0000 UTC m=+825.999138011" Jan 30 21:57:06 crc kubenswrapper[4869]: I0130 21:57:06.060204 4869 generic.go:334] "Generic (PLEG): container finished" podID="0d120a71-eb76-4e21-bd06-a646961dbebc" containerID="76aa1515f0edf85d7dce3576128ae4265b3dcfd4ad286e1f24406ea9ba3364ba" exitCode=0 Jan 30 21:57:06 crc kubenswrapper[4869]: I0130 21:57:06.060309 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qrwjb" event={"ID":"0d120a71-eb76-4e21-bd06-a646961dbebc","Type":"ContainerDied","Data":"76aa1515f0edf85d7dce3576128ae4265b3dcfd4ad286e1f24406ea9ba3364ba"} Jan 30 21:57:06 crc kubenswrapper[4869]: I0130 21:57:06.084398 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-2fslp" podStartSLOduration=3.887092503 podStartE2EDuration="15.084378692s" podCreationTimestamp="2026-01-30 21:56:51 +0000 UTC" firstStartedPulling="2026-01-30 21:56:53.366221326 +0000 UTC m=+814.251979351" lastFinishedPulling="2026-01-30 21:57:04.563507505 +0000 UTC m=+825.449265540" observedRunningTime="2026-01-30 21:57:05.134420965 +0000 UTC m=+826.020178990" watchObservedRunningTime="2026-01-30 21:57:06.084378692 +0000 UTC m=+826.970136727" Jan 30 21:57:07 crc kubenswrapper[4869]: I0130 21:57:07.068022 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qrwjb" event={"ID":"0d120a71-eb76-4e21-bd06-a646961dbebc","Type":"ContainerDied","Data":"dff9bccccfef053c97832de3f78b4a13a1c175a0398b65f43f49c5648c42658e"} Jan 30 21:57:07 crc kubenswrapper[4869]: I0130 21:57:07.068487 4869 generic.go:334] "Generic (PLEG): container finished" podID="0d120a71-eb76-4e21-bd06-a646961dbebc" 
containerID="dff9bccccfef053c97832de3f78b4a13a1c175a0398b65f43f49c5648c42658e" exitCode=0 Jan 30 21:57:08 crc kubenswrapper[4869]: I0130 21:57:08.079925 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qrwjb" event={"ID":"0d120a71-eb76-4e21-bd06-a646961dbebc","Type":"ContainerStarted","Data":"81632ec1512f67edc57acc8972cb02fb465615efe15feae41e96fbba7f7b8c8f"} Jan 30 21:57:08 crc kubenswrapper[4869]: I0130 21:57:08.080258 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qrwjb" event={"ID":"0d120a71-eb76-4e21-bd06-a646961dbebc","Type":"ContainerStarted","Data":"de0c89befd3f325be76e8324764bd8053bfa0725121db6c594f400b4fe01d1f5"} Jan 30 21:57:08 crc kubenswrapper[4869]: I0130 21:57:08.080271 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qrwjb" event={"ID":"0d120a71-eb76-4e21-bd06-a646961dbebc","Type":"ContainerStarted","Data":"fbe83e645be7339bba55861a1d58adb1901d55b742ac9e230ba670e3d7c705bf"} Jan 30 21:57:08 crc kubenswrapper[4869]: I0130 21:57:08.080280 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qrwjb" event={"ID":"0d120a71-eb76-4e21-bd06-a646961dbebc","Type":"ContainerStarted","Data":"a72f9485e55ce8187ea0eeb78b3b590cf823a7374eb54003545f9238c4bdcce7"} Jan 30 21:57:09 crc kubenswrapper[4869]: I0130 21:57:09.089854 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qrwjb" event={"ID":"0d120a71-eb76-4e21-bd06-a646961dbebc","Type":"ContainerStarted","Data":"a575af1e46d02e3ad43cc4a4797daa8df3cb35a62c9e68ffd1edc90af8d9799b"} Jan 30 21:57:09 crc kubenswrapper[4869]: I0130 21:57:09.090770 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qrwjb" event={"ID":"0d120a71-eb76-4e21-bd06-a646961dbebc","Type":"ContainerStarted","Data":"2e5ad7766633c9c0bcd986021d6aad7c9ee6bfa97b0e3af94bac960ced9d8dec"} Jan 30 21:57:09 crc kubenswrapper[4869]: I0130 21:57:09.090887 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-qrwjb" Jan 30 21:57:09 crc kubenswrapper[4869]: I0130 21:57:09.114818 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-qrwjb" podStartSLOduration=6.017704723 podStartE2EDuration="19.114803343s" podCreationTimestamp="2026-01-30 21:56:50 +0000 UTC" firstStartedPulling="2026-01-30 21:56:51.501644336 +0000 UTC m=+812.387402351" lastFinishedPulling="2026-01-30 21:57:04.598742926 +0000 UTC m=+825.484500971" observedRunningTime="2026-01-30 21:57:09.111537441 +0000 UTC m=+829.997295476" watchObservedRunningTime="2026-01-30 21:57:09.114803343 +0000 UTC m=+830.000561368" Jan 30 21:57:10 crc kubenswrapper[4869]: I0130 21:57:10.768491 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-index-rds6c"] Jan 30 21:57:10 crc kubenswrapper[4869]: I0130 21:57:10.769787 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-rds6c" Jan 30 21:57:10 crc kubenswrapper[4869]: I0130 21:57:10.772711 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-index-dockercfg-9nwhs" Jan 30 21:57:10 crc kubenswrapper[4869]: I0130 21:57:10.773055 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 30 21:57:10 crc kubenswrapper[4869]: I0130 21:57:10.773288 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 30 21:57:10 crc kubenswrapper[4869]: I0130 21:57:10.822665 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-rds6c"] Jan 30 21:57:10 crc kubenswrapper[4869]: I0130 21:57:10.884736 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qghcs\" (UniqueName: \"kubernetes.io/projected/59568443-434d-4ebc-b3fb-5ff41435d8c2-kube-api-access-qghcs\") pod \"mariadb-operator-index-rds6c\" (UID: \"59568443-434d-4ebc-b3fb-5ff41435d8c2\") " pod="openstack-operators/mariadb-operator-index-rds6c" Jan 30 21:57:10 crc kubenswrapper[4869]: I0130 21:57:10.986385 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qghcs\" (UniqueName: \"kubernetes.io/projected/59568443-434d-4ebc-b3fb-5ff41435d8c2-kube-api-access-qghcs\") pod \"mariadb-operator-index-rds6c\" (UID: \"59568443-434d-4ebc-b3fb-5ff41435d8c2\") " pod="openstack-operators/mariadb-operator-index-rds6c" Jan 30 21:57:11 crc kubenswrapper[4869]: I0130 21:57:11.011397 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qghcs\" (UniqueName: \"kubernetes.io/projected/59568443-434d-4ebc-b3fb-5ff41435d8c2-kube-api-access-qghcs\") pod \"mariadb-operator-index-rds6c\" (UID: \"59568443-434d-4ebc-b3fb-5ff41435d8c2\") " pod="openstack-operators/mariadb-operator-index-rds6c" Jan 30 21:57:11 crc kubenswrapper[4869]: I0130 21:57:11.086406 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-rds6c" Jan 30 21:57:11 crc kubenswrapper[4869]: I0130 21:57:11.293305 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-qrwjb" Jan 30 21:57:11 crc kubenswrapper[4869]: I0130 21:57:11.337521 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-qrwjb" Jan 30 21:57:11 crc kubenswrapper[4869]: I0130 21:57:11.338196 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-rds6c"] Jan 30 21:57:12 crc kubenswrapper[4869]: I0130 21:57:12.104649 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-rds6c" event={"ID":"59568443-434d-4ebc-b3fb-5ff41435d8c2","Type":"ContainerStarted","Data":"547c1a5f9b0da887f993d0d0d6031bc390b24ff4ef388fda31bb5904964cde13"} Jan 30 21:57:13 crc kubenswrapper[4869]: I0130 21:57:13.940594 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-rds6c"] Jan 30 21:57:14 crc kubenswrapper[4869]: I0130 21:57:14.121081 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-rds6c" event={"ID":"59568443-434d-4ebc-b3fb-5ff41435d8c2","Type":"ContainerStarted","Data":"8dc952d50cce87a9cb4c1fa457b4d84acb8f8b0e1dbb05ce0cd194755e9d1ed6"} Jan 30 21:57:14 crc kubenswrapper[4869]: I0130 21:57:14.138695 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-index-rds6c" podStartSLOduration=1.959359059 podStartE2EDuration="4.138664095s" podCreationTimestamp="2026-01-30 21:57:10 +0000 UTC" firstStartedPulling="2026-01-30 21:57:11.3590523 +0000 UTC m=+832.244810325" lastFinishedPulling="2026-01-30 21:57:13.538357336 +0000 UTC m=+834.424115361" observedRunningTime="2026-01-30 21:57:14.136050183 +0000 UTC m=+835.021808208" watchObservedRunningTime="2026-01-30 21:57:14.138664095 +0000 UTC m=+835.024422120" Jan 30 21:57:14 crc kubenswrapper[4869]: I0130 21:57:14.555743 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-index-blgg8"] Jan 30 21:57:14 crc kubenswrapper[4869]: I0130 21:57:14.557511 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-blgg8" Jan 30 21:57:14 crc kubenswrapper[4869]: I0130 21:57:14.576326 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-blgg8"] Jan 30 21:57:14 crc kubenswrapper[4869]: I0130 21:57:14.629017 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2c9j\" (UniqueName: \"kubernetes.io/projected/af9e7db8-505e-442f-86a8-791e8196ecc0-kube-api-access-x2c9j\") pod \"mariadb-operator-index-blgg8\" (UID: \"af9e7db8-505e-442f-86a8-791e8196ecc0\") " pod="openstack-operators/mariadb-operator-index-blgg8" Jan 30 21:57:14 crc kubenswrapper[4869]: I0130 21:57:14.731253 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2c9j\" (UniqueName: \"kubernetes.io/projected/af9e7db8-505e-442f-86a8-791e8196ecc0-kube-api-access-x2c9j\") pod \"mariadb-operator-index-blgg8\" (UID: \"af9e7db8-505e-442f-86a8-791e8196ecc0\") " pod="openstack-operators/mariadb-operator-index-blgg8" Jan 30 21:57:14 crc kubenswrapper[4869]: I0130 21:57:14.751106 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2c9j\" (UniqueName: \"kubernetes.io/projected/af9e7db8-505e-442f-86a8-791e8196ecc0-kube-api-access-x2c9j\") pod \"mariadb-operator-index-blgg8\" (UID: \"af9e7db8-505e-442f-86a8-791e8196ecc0\") " pod="openstack-operators/mariadb-operator-index-blgg8" Jan 30 21:57:14 crc kubenswrapper[4869]: I0130 21:57:14.892870 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-blgg8" Jan 30 21:57:15 crc kubenswrapper[4869]: I0130 21:57:15.091860 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-blgg8"] Jan 30 21:57:15 crc kubenswrapper[4869]: I0130 21:57:15.127328 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-blgg8" event={"ID":"af9e7db8-505e-442f-86a8-791e8196ecc0","Type":"ContainerStarted","Data":"2b8af07a50615a54a398ae097dc27e424f3fbbf4bfd6dd9dd561cab500f74b8d"} Jan 30 21:57:15 crc kubenswrapper[4869]: I0130 21:57:15.127374 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/mariadb-operator-index-rds6c" podUID="59568443-434d-4ebc-b3fb-5ff41435d8c2" containerName="registry-server" containerID="cri-o://8dc952d50cce87a9cb4c1fa457b4d84acb8f8b0e1dbb05ce0cd194755e9d1ed6" gracePeriod=2 Jan 30 21:57:15 crc kubenswrapper[4869]: I0130 21:57:15.515315 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-rds6c" Jan 30 21:57:15 crc kubenswrapper[4869]: I0130 21:57:15.641461 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qghcs\" (UniqueName: \"kubernetes.io/projected/59568443-434d-4ebc-b3fb-5ff41435d8c2-kube-api-access-qghcs\") pod \"59568443-434d-4ebc-b3fb-5ff41435d8c2\" (UID: \"59568443-434d-4ebc-b3fb-5ff41435d8c2\") " Jan 30 21:57:15 crc kubenswrapper[4869]: I0130 21:57:15.647498 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59568443-434d-4ebc-b3fb-5ff41435d8c2-kube-api-access-qghcs" (OuterVolumeSpecName: "kube-api-access-qghcs") pod "59568443-434d-4ebc-b3fb-5ff41435d8c2" (UID: "59568443-434d-4ebc-b3fb-5ff41435d8c2"). InnerVolumeSpecName "kube-api-access-qghcs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:57:15 crc kubenswrapper[4869]: I0130 21:57:15.743290 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qghcs\" (UniqueName: \"kubernetes.io/projected/59568443-434d-4ebc-b3fb-5ff41435d8c2-kube-api-access-qghcs\") on node \"crc\" DevicePath \"\"" Jan 30 21:57:16 crc kubenswrapper[4869]: I0130 21:57:16.140106 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-blgg8" event={"ID":"af9e7db8-505e-442f-86a8-791e8196ecc0","Type":"ContainerStarted","Data":"a270ebc41d25ac141ed114b48c7c0137be9a32d0142c129cba75d19d9c497131"} Jan 30 21:57:16 crc kubenswrapper[4869]: I0130 21:57:16.144129 4869 generic.go:334] "Generic (PLEG): container finished" podID="59568443-434d-4ebc-b3fb-5ff41435d8c2" containerID="8dc952d50cce87a9cb4c1fa457b4d84acb8f8b0e1dbb05ce0cd194755e9d1ed6" exitCode=0 Jan 30 21:57:16 crc kubenswrapper[4869]: I0130 21:57:16.144202 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-rds6c" Jan 30 21:57:16 crc kubenswrapper[4869]: I0130 21:57:16.144210 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-rds6c" event={"ID":"59568443-434d-4ebc-b3fb-5ff41435d8c2","Type":"ContainerDied","Data":"8dc952d50cce87a9cb4c1fa457b4d84acb8f8b0e1dbb05ce0cd194755e9d1ed6"} Jan 30 21:57:16 crc kubenswrapper[4869]: I0130 21:57:16.144376 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-rds6c" event={"ID":"59568443-434d-4ebc-b3fb-5ff41435d8c2","Type":"ContainerDied","Data":"547c1a5f9b0da887f993d0d0d6031bc390b24ff4ef388fda31bb5904964cde13"} Jan 30 21:57:16 crc kubenswrapper[4869]: I0130 21:57:16.144434 4869 scope.go:117] "RemoveContainer" containerID="8dc952d50cce87a9cb4c1fa457b4d84acb8f8b0e1dbb05ce0cd194755e9d1ed6" Jan 30 21:57:16 crc kubenswrapper[4869]: I0130 21:57:16.169498 4869 scope.go:117] "RemoveContainer" containerID="8dc952d50cce87a9cb4c1fa457b4d84acb8f8b0e1dbb05ce0cd194755e9d1ed6" Jan 30 21:57:16 crc kubenswrapper[4869]: E0130 21:57:16.170509 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8dc952d50cce87a9cb4c1fa457b4d84acb8f8b0e1dbb05ce0cd194755e9d1ed6\": container with ID starting with 8dc952d50cce87a9cb4c1fa457b4d84acb8f8b0e1dbb05ce0cd194755e9d1ed6 not found: ID does not exist" containerID="8dc952d50cce87a9cb4c1fa457b4d84acb8f8b0e1dbb05ce0cd194755e9d1ed6" Jan 30 21:57:16 crc kubenswrapper[4869]: I0130 21:57:16.170557 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8dc952d50cce87a9cb4c1fa457b4d84acb8f8b0e1dbb05ce0cd194755e9d1ed6"} err="failed to get container status \"8dc952d50cce87a9cb4c1fa457b4d84acb8f8b0e1dbb05ce0cd194755e9d1ed6\": rpc error: code = NotFound desc = could not find container \"8dc952d50cce87a9cb4c1fa457b4d84acb8f8b0e1dbb05ce0cd194755e9d1ed6\": container with ID starting with 8dc952d50cce87a9cb4c1fa457b4d84acb8f8b0e1dbb05ce0cd194755e9d1ed6 not found: ID does not exist" Jan 30 21:57:16 crc kubenswrapper[4869]: I0130 21:57:16.182564 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-index-blgg8" podStartSLOduration=1.5094321179999999 podStartE2EDuration="2.182542034s" podCreationTimestamp="2026-01-30 21:57:14 +0000 UTC" firstStartedPulling="2026-01-30 21:57:15.101000391 +0000 
UTC m=+835.986758416" lastFinishedPulling="2026-01-30 21:57:15.774110307 +0000 UTC m=+836.659868332" observedRunningTime="2026-01-30 21:57:16.169483936 +0000 UTC m=+837.055242001" watchObservedRunningTime="2026-01-30 21:57:16.182542034 +0000 UTC m=+837.068300059" Jan 30 21:57:16 crc kubenswrapper[4869]: I0130 21:57:16.186607 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-rds6c"] Jan 30 21:57:16 crc kubenswrapper[4869]: I0130 21:57:16.189846 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/mariadb-operator-index-rds6c"] Jan 30 21:57:17 crc kubenswrapper[4869]: I0130 21:57:17.886646 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59568443-434d-4ebc-b3fb-5ff41435d8c2" path="/var/lib/kubelet/pods/59568443-434d-4ebc-b3fb-5ff41435d8c2/volumes" Jan 30 21:57:21 crc kubenswrapper[4869]: I0130 21:57:21.292791 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-pdqbz" Jan 30 21:57:21 crc kubenswrapper[4869]: I0130 21:57:21.303329 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-qrwjb" Jan 30 21:57:24 crc kubenswrapper[4869]: I0130 21:57:24.893487 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/mariadb-operator-index-blgg8" Jan 30 21:57:24 crc kubenswrapper[4869]: I0130 21:57:24.893803 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-index-blgg8" Jan 30 21:57:24 crc kubenswrapper[4869]: I0130 21:57:24.925430 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/mariadb-operator-index-blgg8" Jan 30 21:57:25 crc kubenswrapper[4869]: I0130 21:57:25.211369 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-index-blgg8" Jan 30 21:57:30 crc kubenswrapper[4869]: I0130 21:57:30.837529 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40wqhbd"] Jan 30 21:57:30 crc kubenswrapper[4869]: E0130 21:57:30.838869 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59568443-434d-4ebc-b3fb-5ff41435d8c2" containerName="registry-server" Jan 30 21:57:30 crc kubenswrapper[4869]: I0130 21:57:30.838934 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="59568443-434d-4ebc-b3fb-5ff41435d8c2" containerName="registry-server" Jan 30 21:57:30 crc kubenswrapper[4869]: I0130 21:57:30.839076 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="59568443-434d-4ebc-b3fb-5ff41435d8c2" containerName="registry-server" Jan 30 21:57:30 crc kubenswrapper[4869]: I0130 21:57:30.840354 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40wqhbd" Jan 30 21:57:30 crc kubenswrapper[4869]: I0130 21:57:30.845310 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-jpqpq" Jan 30 21:57:30 crc kubenswrapper[4869]: I0130 21:57:30.853048 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40wqhbd"] Jan 30 21:57:30 crc kubenswrapper[4869]: I0130 21:57:30.942637 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2-util\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40wqhbd\" (UID: \"ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40wqhbd" Jan 30 21:57:30 crc kubenswrapper[4869]: I0130 21:57:30.942689 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mh5p\" (UniqueName: \"kubernetes.io/projected/ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2-kube-api-access-5mh5p\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40wqhbd\" (UID: \"ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40wqhbd" Jan 30 21:57:30 crc kubenswrapper[4869]: I0130 21:57:30.942737 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2-bundle\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40wqhbd\" (UID: \"ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40wqhbd" Jan 30 21:57:31 crc kubenswrapper[4869]: I0130 21:57:31.043437 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2-util\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40wqhbd\" (UID: \"ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40wqhbd" Jan 30 21:57:31 crc kubenswrapper[4869]: I0130 21:57:31.043491 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mh5p\" (UniqueName: \"kubernetes.io/projected/ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2-kube-api-access-5mh5p\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40wqhbd\" (UID: \"ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40wqhbd" Jan 30 21:57:31 crc kubenswrapper[4869]: I0130 21:57:31.043519 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2-bundle\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40wqhbd\" (UID: \"ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40wqhbd" Jan 30 21:57:31 crc kubenswrapper[4869]: I0130 21:57:31.043967 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2-bundle\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40wqhbd\" (UID: \"ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40wqhbd" Jan 30 21:57:31 crc kubenswrapper[4869]: I0130 21:57:31.044179 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2-util\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40wqhbd\" (UID: \"ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40wqhbd" Jan 30 21:57:31 crc kubenswrapper[4869]: I0130 21:57:31.068226 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mh5p\" (UniqueName: \"kubernetes.io/projected/ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2-kube-api-access-5mh5p\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40wqhbd\" (UID: \"ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40wqhbd" Jan 30 21:57:31 crc kubenswrapper[4869]: I0130 21:57:31.168812 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40wqhbd" Jan 30 21:57:31 crc kubenswrapper[4869]: I0130 21:57:31.388422 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40wqhbd"] Jan 30 21:57:31 crc kubenswrapper[4869]: I0130 21:57:31.990759 4869 patch_prober.go:28] interesting pod/machine-config-daemon-vzgdv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:57:31 crc kubenswrapper[4869]: I0130 21:57:31.990840 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:57:31 crc kubenswrapper[4869]: I0130 21:57:31.990911 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" Jan 30 21:57:31 crc kubenswrapper[4869]: I0130 21:57:31.991549 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a7ff76a93ea10c54b4c308e0ca79595cb658f2169be1db3ae81fc5f671455e21"} pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 21:57:31 crc kubenswrapper[4869]: I0130 21:57:31.991612 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500" containerName="machine-config-daemon" containerID="cri-o://a7ff76a93ea10c54b4c308e0ca79595cb658f2169be1db3ae81fc5f671455e21" gracePeriod=600 Jan 30 21:57:32 crc kubenswrapper[4869]: I0130 21:57:32.230725 4869 generic.go:334] "Generic (PLEG): container finished" 
podID="b6fc0664-5e80-440d-a6e8-4189cdf5c500" containerID="a7ff76a93ea10c54b4c308e0ca79595cb658f2169be1db3ae81fc5f671455e21" exitCode=0 Jan 30 21:57:32 crc kubenswrapper[4869]: I0130 21:57:32.230798 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" event={"ID":"b6fc0664-5e80-440d-a6e8-4189cdf5c500","Type":"ContainerDied","Data":"a7ff76a93ea10c54b4c308e0ca79595cb658f2169be1db3ae81fc5f671455e21"} Jan 30 21:57:32 crc kubenswrapper[4869]: I0130 21:57:32.230861 4869 scope.go:117] "RemoveContainer" containerID="496fc9807f9bce8a450851cd04a8d88568f861a83556f3656c812c3d38022119" Jan 30 21:57:32 crc kubenswrapper[4869]: I0130 21:57:32.233520 4869 generic.go:334] "Generic (PLEG): container finished" podID="ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2" containerID="e1a7767c19f8c6c9622447533b51eb908722efe806cc7054240f362d0b306825" exitCode=0 Jan 30 21:57:32 crc kubenswrapper[4869]: I0130 21:57:32.233591 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40wqhbd" event={"ID":"ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2","Type":"ContainerDied","Data":"e1a7767c19f8c6c9622447533b51eb908722efe806cc7054240f362d0b306825"} Jan 30 21:57:32 crc kubenswrapper[4869]: I0130 21:57:32.233635 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40wqhbd" event={"ID":"ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2","Type":"ContainerStarted","Data":"8d11f0c261215e8a138aef5233a25300b9410b19a611dc3bb539df5003dd8421"} Jan 30 21:57:33 crc kubenswrapper[4869]: I0130 21:57:33.246002 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" event={"ID":"b6fc0664-5e80-440d-a6e8-4189cdf5c500","Type":"ContainerStarted","Data":"6192238ff2265f4c3d1b0ce1e53fb7be4490d1247ceca7a0a3ab91e6567a4b90"} Jan 30 21:57:34 crc kubenswrapper[4869]: I0130 21:57:34.255147 4869 generic.go:334] "Generic (PLEG): container finished" podID="ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2" containerID="1f2f4f3e6f6c27924cc9dc8868d50d7db253eb44104a2c62c54442508e2f2298" exitCode=0 Jan 30 21:57:34 crc kubenswrapper[4869]: I0130 21:57:34.256076 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40wqhbd" event={"ID":"ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2","Type":"ContainerDied","Data":"1f2f4f3e6f6c27924cc9dc8868d50d7db253eb44104a2c62c54442508e2f2298"} Jan 30 21:57:35 crc kubenswrapper[4869]: I0130 21:57:35.267186 4869 generic.go:334] "Generic (PLEG): container finished" podID="ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2" containerID="1bb7e6f5ee43527aa4c53bbb2b1151c568ca6ba344a24a047151be5ac55e9adf" exitCode=0 Jan 30 21:57:35 crc kubenswrapper[4869]: I0130 21:57:35.267243 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40wqhbd" event={"ID":"ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2","Type":"ContainerDied","Data":"1bb7e6f5ee43527aa4c53bbb2b1151c568ca6ba344a24a047151be5ac55e9adf"} Jan 30 21:57:36 crc kubenswrapper[4869]: I0130 21:57:36.531402 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40wqhbd" Jan 30 21:57:36 crc kubenswrapper[4869]: I0130 21:57:36.624322 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2-util\") pod \"ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2\" (UID: \"ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2\") " Jan 30 21:57:36 crc kubenswrapper[4869]: I0130 21:57:36.624385 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2-bundle\") pod \"ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2\" (UID: \"ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2\") " Jan 30 21:57:36 crc kubenswrapper[4869]: I0130 21:57:36.624407 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mh5p\" (UniqueName: \"kubernetes.io/projected/ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2-kube-api-access-5mh5p\") pod \"ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2\" (UID: \"ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2\") " Jan 30 21:57:36 crc kubenswrapper[4869]: I0130 21:57:36.625365 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2-bundle" (OuterVolumeSpecName: "bundle") pod "ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2" (UID: "ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:57:36 crc kubenswrapper[4869]: I0130 21:57:36.632113 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2-kube-api-access-5mh5p" (OuterVolumeSpecName: "kube-api-access-5mh5p") pod "ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2" (UID: "ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2"). InnerVolumeSpecName "kube-api-access-5mh5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:57:36 crc kubenswrapper[4869]: I0130 21:57:36.637968 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2-util" (OuterVolumeSpecName: "util") pod "ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2" (UID: "ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:57:36 crc kubenswrapper[4869]: I0130 21:57:36.726086 4869 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2-util\") on node \"crc\" DevicePath \"\"" Jan 30 21:57:36 crc kubenswrapper[4869]: I0130 21:57:36.726135 4869 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:57:36 crc kubenswrapper[4869]: I0130 21:57:36.726149 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mh5p\" (UniqueName: \"kubernetes.io/projected/ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2-kube-api-access-5mh5p\") on node \"crc\" DevicePath \"\"" Jan 30 21:57:37 crc kubenswrapper[4869]: I0130 21:57:37.280439 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40wqhbd" event={"ID":"ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2","Type":"ContainerDied","Data":"8d11f0c261215e8a138aef5233a25300b9410b19a611dc3bb539df5003dd8421"} Jan 30 21:57:37 crc kubenswrapper[4869]: I0130 21:57:37.280490 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d11f0c261215e8a138aef5233a25300b9410b19a611dc3bb539df5003dd8421" Jan 30 21:57:37 crc kubenswrapper[4869]: I0130 21:57:37.280500 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40wqhbd" Jan 30 21:57:44 crc kubenswrapper[4869]: I0130 21:57:44.074975 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-86cdc6c597-g94qc"] Jan 30 21:57:44 crc kubenswrapper[4869]: E0130 21:57:44.075562 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2" containerName="extract" Jan 30 21:57:44 crc kubenswrapper[4869]: I0130 21:57:44.075574 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2" containerName="extract" Jan 30 21:57:44 crc kubenswrapper[4869]: E0130 21:57:44.075586 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2" containerName="util" Jan 30 21:57:44 crc kubenswrapper[4869]: I0130 21:57:44.075592 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2" containerName="util" Jan 30 21:57:44 crc kubenswrapper[4869]: E0130 21:57:44.075602 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2" containerName="pull" Jan 30 21:57:44 crc kubenswrapper[4869]: I0130 21:57:44.075608 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2" containerName="pull" Jan 30 21:57:44 crc kubenswrapper[4869]: I0130 21:57:44.075703 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2" containerName="extract" Jan 30 21:57:44 crc kubenswrapper[4869]: I0130 21:57:44.076059 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-86cdc6c597-g94qc" Jan 30 21:57:44 crc kubenswrapper[4869]: I0130 21:57:44.077915 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-service-cert" Jan 30 21:57:44 crc kubenswrapper[4869]: I0130 21:57:44.077925 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-shxsg" Jan 30 21:57:44 crc kubenswrapper[4869]: I0130 21:57:44.078117 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 30 21:57:44 crc kubenswrapper[4869]: I0130 21:57:44.096561 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-86cdc6c597-g94qc"] Jan 30 21:57:44 crc kubenswrapper[4869]: I0130 21:57:44.217174 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/54633931-4810-498b-9b01-c800f623a2d4-apiservice-cert\") pod \"mariadb-operator-controller-manager-86cdc6c597-g94qc\" (UID: \"54633931-4810-498b-9b01-c800f623a2d4\") " pod="openstack-operators/mariadb-operator-controller-manager-86cdc6c597-g94qc" Jan 30 21:57:44 crc kubenswrapper[4869]: I0130 21:57:44.217218 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t76pr\" (UniqueName: \"kubernetes.io/projected/54633931-4810-498b-9b01-c800f623a2d4-kube-api-access-t76pr\") pod \"mariadb-operator-controller-manager-86cdc6c597-g94qc\" (UID: \"54633931-4810-498b-9b01-c800f623a2d4\") " pod="openstack-operators/mariadb-operator-controller-manager-86cdc6c597-g94qc" Jan 30 21:57:44 crc kubenswrapper[4869]: I0130 21:57:44.217243 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/54633931-4810-498b-9b01-c800f623a2d4-webhook-cert\") pod \"mariadb-operator-controller-manager-86cdc6c597-g94qc\" (UID: \"54633931-4810-498b-9b01-c800f623a2d4\") " pod="openstack-operators/mariadb-operator-controller-manager-86cdc6c597-g94qc" Jan 30 21:57:44 crc kubenswrapper[4869]: I0130 21:57:44.317912 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/54633931-4810-498b-9b01-c800f623a2d4-webhook-cert\") pod \"mariadb-operator-controller-manager-86cdc6c597-g94qc\" (UID: \"54633931-4810-498b-9b01-c800f623a2d4\") " pod="openstack-operators/mariadb-operator-controller-manager-86cdc6c597-g94qc" Jan 30 21:57:44 crc kubenswrapper[4869]: I0130 21:57:44.318267 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/54633931-4810-498b-9b01-c800f623a2d4-apiservice-cert\") pod \"mariadb-operator-controller-manager-86cdc6c597-g94qc\" (UID: \"54633931-4810-498b-9b01-c800f623a2d4\") " pod="openstack-operators/mariadb-operator-controller-manager-86cdc6c597-g94qc" Jan 30 21:57:44 crc kubenswrapper[4869]: I0130 21:57:44.318393 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t76pr\" (UniqueName: \"kubernetes.io/projected/54633931-4810-498b-9b01-c800f623a2d4-kube-api-access-t76pr\") pod \"mariadb-operator-controller-manager-86cdc6c597-g94qc\" (UID: \"54633931-4810-498b-9b01-c800f623a2d4\") 
" pod="openstack-operators/mariadb-operator-controller-manager-86cdc6c597-g94qc" Jan 30 21:57:44 crc kubenswrapper[4869]: I0130 21:57:44.327871 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/54633931-4810-498b-9b01-c800f623a2d4-webhook-cert\") pod \"mariadb-operator-controller-manager-86cdc6c597-g94qc\" (UID: \"54633931-4810-498b-9b01-c800f623a2d4\") " pod="openstack-operators/mariadb-operator-controller-manager-86cdc6c597-g94qc" Jan 30 21:57:44 crc kubenswrapper[4869]: I0130 21:57:44.331503 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/54633931-4810-498b-9b01-c800f623a2d4-apiservice-cert\") pod \"mariadb-operator-controller-manager-86cdc6c597-g94qc\" (UID: \"54633931-4810-498b-9b01-c800f623a2d4\") " pod="openstack-operators/mariadb-operator-controller-manager-86cdc6c597-g94qc" Jan 30 21:57:44 crc kubenswrapper[4869]: I0130 21:57:44.339983 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t76pr\" (UniqueName: \"kubernetes.io/projected/54633931-4810-498b-9b01-c800f623a2d4-kube-api-access-t76pr\") pod \"mariadb-operator-controller-manager-86cdc6c597-g94qc\" (UID: \"54633931-4810-498b-9b01-c800f623a2d4\") " pod="openstack-operators/mariadb-operator-controller-manager-86cdc6c597-g94qc" Jan 30 21:57:44 crc kubenswrapper[4869]: I0130 21:57:44.393296 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-86cdc6c597-g94qc" Jan 30 21:57:44 crc kubenswrapper[4869]: I0130 21:57:44.635668 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-86cdc6c597-g94qc"] Jan 30 21:57:45 crc kubenswrapper[4869]: I0130 21:57:45.327715 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-86cdc6c597-g94qc" event={"ID":"54633931-4810-498b-9b01-c800f623a2d4","Type":"ContainerStarted","Data":"ea72e51bb5a8e3f58ebd8738f80bf2f4e72486c95e23f41ec2325d9c402ec77a"} Jan 30 21:57:49 crc kubenswrapper[4869]: I0130 21:57:49.356906 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-86cdc6c597-g94qc" event={"ID":"54633931-4810-498b-9b01-c800f623a2d4","Type":"ContainerStarted","Data":"376da18648cbc1ea030207551d3774a8f23dadf5eeb2cb4efe225172ab00db96"} Jan 30 21:57:49 crc kubenswrapper[4869]: I0130 21:57:49.357504 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-86cdc6c597-g94qc" Jan 30 21:57:49 crc kubenswrapper[4869]: I0130 21:57:49.382242 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-86cdc6c597-g94qc" podStartSLOduration=1.314912316 podStartE2EDuration="5.382227406s" podCreationTimestamp="2026-01-30 21:57:44 +0000 UTC" firstStartedPulling="2026-01-30 21:57:44.644678378 +0000 UTC m=+865.530436403" lastFinishedPulling="2026-01-30 21:57:48.711993468 +0000 UTC m=+869.597751493" observedRunningTime="2026-01-30 21:57:49.378673366 +0000 UTC m=+870.264431391" watchObservedRunningTime="2026-01-30 21:57:49.382227406 +0000 UTC m=+870.267985431" Jan 30 21:57:54 crc kubenswrapper[4869]: I0130 21:57:54.398163 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/mariadb-operator-controller-manager-86cdc6c597-g94qc" Jan 30 21:57:58 crc kubenswrapper[4869]: I0130 21:57:58.573820 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-index-cx77f"] Jan 30 21:57:58 crc kubenswrapper[4869]: I0130 21:57:58.574956 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-cx77f" Jan 30 21:57:58 crc kubenswrapper[4869]: I0130 21:57:58.578009 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-index-dockercfg-nl2xw" Jan 30 21:57:58 crc kubenswrapper[4869]: I0130 21:57:58.598280 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-cx77f"] Jan 30 21:57:58 crc kubenswrapper[4869]: I0130 21:57:58.604253 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwhp2\" (UniqueName: \"kubernetes.io/projected/a79446ac-d496-4509-ab4e-fb25c2a43092-kube-api-access-nwhp2\") pod \"infra-operator-index-cx77f\" (UID: \"a79446ac-d496-4509-ab4e-fb25c2a43092\") " pod="openstack-operators/infra-operator-index-cx77f" Jan 30 21:57:58 crc kubenswrapper[4869]: I0130 21:57:58.704966 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwhp2\" (UniqueName: \"kubernetes.io/projected/a79446ac-d496-4509-ab4e-fb25c2a43092-kube-api-access-nwhp2\") pod \"infra-operator-index-cx77f\" (UID: \"a79446ac-d496-4509-ab4e-fb25c2a43092\") " pod="openstack-operators/infra-operator-index-cx77f" Jan 30 21:57:58 crc kubenswrapper[4869]: I0130 21:57:58.725808 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwhp2\" (UniqueName: \"kubernetes.io/projected/a79446ac-d496-4509-ab4e-fb25c2a43092-kube-api-access-nwhp2\") pod \"infra-operator-index-cx77f\" (UID: \"a79446ac-d496-4509-ab4e-fb25c2a43092\") " pod="openstack-operators/infra-operator-index-cx77f" Jan 30 21:57:58 crc kubenswrapper[4869]: I0130 21:57:58.903806 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-cx77f" Jan 30 21:57:59 crc kubenswrapper[4869]: I0130 21:57:59.167141 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-cx77f"] Jan 30 21:57:59 crc kubenswrapper[4869]: I0130 21:57:59.408195 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-cx77f" event={"ID":"a79446ac-d496-4509-ab4e-fb25c2a43092","Type":"ContainerStarted","Data":"c759353718337f903adbfe70ae550fedec5c702544a75cb046cfa563a4abd0d0"} Jan 30 21:58:02 crc kubenswrapper[4869]: I0130 21:58:02.350686 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-index-cx77f"] Jan 30 21:58:02 crc kubenswrapper[4869]: I0130 21:58:02.427151 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-cx77f" event={"ID":"a79446ac-d496-4509-ab4e-fb25c2a43092","Type":"ContainerStarted","Data":"35e7e9ab644e5cb01bcdd369127e43f4aaab57230c184af23c41e75e3e08327a"} Jan 30 21:58:02 crc kubenswrapper[4869]: I0130 21:58:02.447082 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-index-cx77f" podStartSLOduration=1.926613148 podStartE2EDuration="4.447060844s" podCreationTimestamp="2026-01-30 21:57:58 +0000 UTC" firstStartedPulling="2026-01-30 21:57:59.174951902 +0000 UTC m=+880.060709927" lastFinishedPulling="2026-01-30 21:58:01.695399598 +0000 UTC m=+882.581157623" observedRunningTime="2026-01-30 21:58:02.444613018 +0000 UTC m=+883.330371043" watchObservedRunningTime="2026-01-30 21:58:02.447060844 +0000 UTC m=+883.332818879" Jan 30 21:58:02 crc kubenswrapper[4869]: I0130 21:58:02.956806 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-index-jsvkg"] Jan 30 21:58:02 crc kubenswrapper[4869]: I0130 21:58:02.957975 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-jsvkg" Jan 30 21:58:02 crc kubenswrapper[4869]: I0130 21:58:02.967569 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-jsvkg"] Jan 30 21:58:03 crc kubenswrapper[4869]: I0130 21:58:03.073440 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7w6n\" (UniqueName: \"kubernetes.io/projected/f8e80c91-24a2-43d0-8add-685b3fb41e69-kube-api-access-r7w6n\") pod \"infra-operator-index-jsvkg\" (UID: \"f8e80c91-24a2-43d0-8add-685b3fb41e69\") " pod="openstack-operators/infra-operator-index-jsvkg" Jan 30 21:58:03 crc kubenswrapper[4869]: I0130 21:58:03.175019 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7w6n\" (UniqueName: \"kubernetes.io/projected/f8e80c91-24a2-43d0-8add-685b3fb41e69-kube-api-access-r7w6n\") pod \"infra-operator-index-jsvkg\" (UID: \"f8e80c91-24a2-43d0-8add-685b3fb41e69\") " pod="openstack-operators/infra-operator-index-jsvkg" Jan 30 21:58:03 crc kubenswrapper[4869]: I0130 21:58:03.197144 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7w6n\" (UniqueName: \"kubernetes.io/projected/f8e80c91-24a2-43d0-8add-685b3fb41e69-kube-api-access-r7w6n\") pod \"infra-operator-index-jsvkg\" (UID: \"f8e80c91-24a2-43d0-8add-685b3fb41e69\") " pod="openstack-operators/infra-operator-index-jsvkg" Jan 30 21:58:03 crc kubenswrapper[4869]: I0130 21:58:03.282765 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-jsvkg" Jan 30 21:58:03 crc kubenswrapper[4869]: I0130 21:58:03.433121 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/infra-operator-index-cx77f" podUID="a79446ac-d496-4509-ab4e-fb25c2a43092" containerName="registry-server" containerID="cri-o://35e7e9ab644e5cb01bcdd369127e43f4aaab57230c184af23c41e75e3e08327a" gracePeriod=2 Jan 30 21:58:03 crc kubenswrapper[4869]: I0130 21:58:03.697123 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-jsvkg"] Jan 30 21:58:03 crc kubenswrapper[4869]: W0130 21:58:03.702283 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8e80c91_24a2_43d0_8add_685b3fb41e69.slice/crio-b42c4582217985f1a6edef7323be0879a99af3eb77add3cb468b295bf9ab3229 WatchSource:0}: Error finding container b42c4582217985f1a6edef7323be0879a99af3eb77add3cb468b295bf9ab3229: Status 404 returned error can't find the container with id b42c4582217985f1a6edef7323be0879a99af3eb77add3cb468b295bf9ab3229 Jan 30 21:58:03 crc kubenswrapper[4869]: I0130 21:58:03.924396 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-cx77f" Jan 30 21:58:03 crc kubenswrapper[4869]: I0130 21:58:03.986792 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwhp2\" (UniqueName: \"kubernetes.io/projected/a79446ac-d496-4509-ab4e-fb25c2a43092-kube-api-access-nwhp2\") pod \"a79446ac-d496-4509-ab4e-fb25c2a43092\" (UID: \"a79446ac-d496-4509-ab4e-fb25c2a43092\") " Jan 30 21:58:03 crc kubenswrapper[4869]: I0130 21:58:03.991754 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a79446ac-d496-4509-ab4e-fb25c2a43092-kube-api-access-nwhp2" (OuterVolumeSpecName: "kube-api-access-nwhp2") pod "a79446ac-d496-4509-ab4e-fb25c2a43092" (UID: "a79446ac-d496-4509-ab4e-fb25c2a43092"). InnerVolumeSpecName "kube-api-access-nwhp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:58:04 crc kubenswrapper[4869]: I0130 21:58:04.088115 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwhp2\" (UniqueName: \"kubernetes.io/projected/a79446ac-d496-4509-ab4e-fb25c2a43092-kube-api-access-nwhp2\") on node \"crc\" DevicePath \"\"" Jan 30 21:58:04 crc kubenswrapper[4869]: I0130 21:58:04.441060 4869 generic.go:334] "Generic (PLEG): container finished" podID="a79446ac-d496-4509-ab4e-fb25c2a43092" containerID="35e7e9ab644e5cb01bcdd369127e43f4aaab57230c184af23c41e75e3e08327a" exitCode=0 Jan 30 21:58:04 crc kubenswrapper[4869]: I0130 21:58:04.441116 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-cx77f" Jan 30 21:58:04 crc kubenswrapper[4869]: I0130 21:58:04.441177 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-cx77f" event={"ID":"a79446ac-d496-4509-ab4e-fb25c2a43092","Type":"ContainerDied","Data":"35e7e9ab644e5cb01bcdd369127e43f4aaab57230c184af23c41e75e3e08327a"} Jan 30 21:58:04 crc kubenswrapper[4869]: I0130 21:58:04.441244 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-cx77f" event={"ID":"a79446ac-d496-4509-ab4e-fb25c2a43092","Type":"ContainerDied","Data":"c759353718337f903adbfe70ae550fedec5c702544a75cb046cfa563a4abd0d0"} Jan 30 21:58:04 crc kubenswrapper[4869]: I0130 21:58:04.441561 4869 scope.go:117] "RemoveContainer" containerID="35e7e9ab644e5cb01bcdd369127e43f4aaab57230c184af23c41e75e3e08327a" Jan 30 21:58:04 crc kubenswrapper[4869]: I0130 21:58:04.442145 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-jsvkg" event={"ID":"f8e80c91-24a2-43d0-8add-685b3fb41e69","Type":"ContainerStarted","Data":"b42c4582217985f1a6edef7323be0879a99af3eb77add3cb468b295bf9ab3229"} Jan 30 21:58:04 crc kubenswrapper[4869]: I0130 21:58:04.460692 4869 scope.go:117] "RemoveContainer" containerID="35e7e9ab644e5cb01bcdd369127e43f4aaab57230c184af23c41e75e3e08327a" Jan 30 21:58:04 crc kubenswrapper[4869]: E0130 21:58:04.461037 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35e7e9ab644e5cb01bcdd369127e43f4aaab57230c184af23c41e75e3e08327a\": container with ID starting with 35e7e9ab644e5cb01bcdd369127e43f4aaab57230c184af23c41e75e3e08327a not found: ID does not exist" containerID="35e7e9ab644e5cb01bcdd369127e43f4aaab57230c184af23c41e75e3e08327a" Jan 30 21:58:04 crc kubenswrapper[4869]: I0130 21:58:04.461067 4869 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"35e7e9ab644e5cb01bcdd369127e43f4aaab57230c184af23c41e75e3e08327a"} err="failed to get container status \"35e7e9ab644e5cb01bcdd369127e43f4aaab57230c184af23c41e75e3e08327a\": rpc error: code = NotFound desc = could not find container \"35e7e9ab644e5cb01bcdd369127e43f4aaab57230c184af23c41e75e3e08327a\": container with ID starting with 35e7e9ab644e5cb01bcdd369127e43f4aaab57230c184af23c41e75e3e08327a not found: ID does not exist" Jan 30 21:58:04 crc kubenswrapper[4869]: I0130 21:58:04.484106 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-index-cx77f"] Jan 30 21:58:04 crc kubenswrapper[4869]: I0130 21:58:04.492238 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/infra-operator-index-cx77f"] Jan 30 21:58:05 crc kubenswrapper[4869]: I0130 21:58:05.885987 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a79446ac-d496-4509-ab4e-fb25c2a43092" path="/var/lib/kubelet/pods/a79446ac-d496-4509-ab4e-fb25c2a43092/volumes" Jan 30 21:58:06 crc kubenswrapper[4869]: I0130 21:58:06.457141 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-jsvkg" event={"ID":"f8e80c91-24a2-43d0-8add-685b3fb41e69","Type":"ContainerStarted","Data":"abf4d6296394e843caa120a853972aeef205a4e9957fd3b14965c4dd732e24b9"} Jan 30 21:58:06 crc kubenswrapper[4869]: I0130 21:58:06.474060 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-index-jsvkg" podStartSLOduration=3.241206312 podStartE2EDuration="4.474039492s" podCreationTimestamp="2026-01-30 21:58:02 +0000 UTC" firstStartedPulling="2026-01-30 21:58:03.706779576 +0000 UTC m=+884.592537601" lastFinishedPulling="2026-01-30 21:58:04.939612756 +0000 UTC m=+885.825370781" observedRunningTime="2026-01-30 21:58:06.47236577 +0000 UTC m=+887.358123825" watchObservedRunningTime="2026-01-30 21:58:06.474039492 +0000 UTC m=+887.359797517" Jan 30 21:58:13 crc kubenswrapper[4869]: I0130 21:58:13.282961 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/infra-operator-index-jsvkg" Jan 30 21:58:13 crc kubenswrapper[4869]: I0130 21:58:13.283498 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-index-jsvkg" Jan 30 21:58:13 crc kubenswrapper[4869]: I0130 21:58:13.307376 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/infra-operator-index-jsvkg" Jan 30 21:58:13 crc kubenswrapper[4869]: I0130 21:58:13.517069 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-index-jsvkg" Jan 30 21:58:15 crc kubenswrapper[4869]: I0130 21:58:15.991311 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576ds5gt"] Jan 30 21:58:15 crc kubenswrapper[4869]: E0130 21:58:15.991868 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a79446ac-d496-4509-ab4e-fb25c2a43092" containerName="registry-server" Jan 30 21:58:15 crc kubenswrapper[4869]: I0130 21:58:15.991885 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a79446ac-d496-4509-ab4e-fb25c2a43092" containerName="registry-server" Jan 30 21:58:15 crc kubenswrapper[4869]: I0130 21:58:15.992033 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a79446ac-d496-4509-ab4e-fb25c2a43092" 
containerName="registry-server" Jan 30 21:58:15 crc kubenswrapper[4869]: I0130 21:58:15.992858 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576ds5gt" Jan 30 21:58:15 crc kubenswrapper[4869]: I0130 21:58:15.994923 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-jpqpq" Jan 30 21:58:16 crc kubenswrapper[4869]: I0130 21:58:16.005163 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576ds5gt"] Jan 30 21:58:16 crc kubenswrapper[4869]: I0130 21:58:16.129998 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ffc49835-5a2c-434a-984b-10abf3fe7a55-bundle\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576ds5gt\" (UID: \"ffc49835-5a2c-434a-984b-10abf3fe7a55\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576ds5gt" Jan 30 21:58:16 crc kubenswrapper[4869]: I0130 21:58:16.130080 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24pkz\" (UniqueName: \"kubernetes.io/projected/ffc49835-5a2c-434a-984b-10abf3fe7a55-kube-api-access-24pkz\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576ds5gt\" (UID: \"ffc49835-5a2c-434a-984b-10abf3fe7a55\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576ds5gt" Jan 30 21:58:16 crc kubenswrapper[4869]: I0130 21:58:16.130170 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ffc49835-5a2c-434a-984b-10abf3fe7a55-util\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576ds5gt\" (UID: \"ffc49835-5a2c-434a-984b-10abf3fe7a55\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576ds5gt" Jan 30 21:58:16 crc kubenswrapper[4869]: I0130 21:58:16.231185 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ffc49835-5a2c-434a-984b-10abf3fe7a55-util\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576ds5gt\" (UID: \"ffc49835-5a2c-434a-984b-10abf3fe7a55\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576ds5gt" Jan 30 21:58:16 crc kubenswrapper[4869]: I0130 21:58:16.231263 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ffc49835-5a2c-434a-984b-10abf3fe7a55-bundle\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576ds5gt\" (UID: \"ffc49835-5a2c-434a-984b-10abf3fe7a55\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576ds5gt" Jan 30 21:58:16 crc kubenswrapper[4869]: I0130 21:58:16.231293 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24pkz\" (UniqueName: \"kubernetes.io/projected/ffc49835-5a2c-434a-984b-10abf3fe7a55-kube-api-access-24pkz\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576ds5gt\" (UID: \"ffc49835-5a2c-434a-984b-10abf3fe7a55\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576ds5gt" Jan 30 21:58:16 crc kubenswrapper[4869]: I0130 21:58:16.231758 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ffc49835-5a2c-434a-984b-10abf3fe7a55-util\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576ds5gt\" (UID: \"ffc49835-5a2c-434a-984b-10abf3fe7a55\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576ds5gt" Jan 30 21:58:16 crc kubenswrapper[4869]: I0130 21:58:16.231810 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ffc49835-5a2c-434a-984b-10abf3fe7a55-bundle\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576ds5gt\" (UID: \"ffc49835-5a2c-434a-984b-10abf3fe7a55\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576ds5gt" Jan 30 21:58:16 crc kubenswrapper[4869]: I0130 21:58:16.257557 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24pkz\" (UniqueName: \"kubernetes.io/projected/ffc49835-5a2c-434a-984b-10abf3fe7a55-kube-api-access-24pkz\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576ds5gt\" (UID: \"ffc49835-5a2c-434a-984b-10abf3fe7a55\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576ds5gt" Jan 30 21:58:16 crc kubenswrapper[4869]: I0130 21:58:16.307708 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576ds5gt" Jan 30 21:58:16 crc kubenswrapper[4869]: I0130 21:58:16.791619 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576ds5gt"] Jan 30 21:58:17 crc kubenswrapper[4869]: I0130 21:58:17.514289 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576ds5gt" event={"ID":"ffc49835-5a2c-434a-984b-10abf3fe7a55","Type":"ContainerStarted","Data":"075f374a74d264367e35ff42061cd9f043b3cfc3a9cad6906302a578ec1aab73"} Jan 30 21:58:17 crc kubenswrapper[4869]: I0130 21:58:17.514328 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576ds5gt" event={"ID":"ffc49835-5a2c-434a-984b-10abf3fe7a55","Type":"ContainerStarted","Data":"cf3ae5c79968890693e8e23d7eb1df92ed253ed51fd88fc86ec5c20bb7760e9e"} Jan 30 21:58:18 crc kubenswrapper[4869]: I0130 21:58:18.521270 4869 generic.go:334] "Generic (PLEG): container finished" podID="ffc49835-5a2c-434a-984b-10abf3fe7a55" containerID="075f374a74d264367e35ff42061cd9f043b3cfc3a9cad6906302a578ec1aab73" exitCode=0 Jan 30 21:58:18 crc kubenswrapper[4869]: I0130 21:58:18.521346 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576ds5gt" event={"ID":"ffc49835-5a2c-434a-984b-10abf3fe7a55","Type":"ContainerDied","Data":"075f374a74d264367e35ff42061cd9f043b3cfc3a9cad6906302a578ec1aab73"} Jan 30 21:58:19 crc kubenswrapper[4869]: I0130 21:58:19.527711 4869 generic.go:334] "Generic (PLEG): container finished" podID="ffc49835-5a2c-434a-984b-10abf3fe7a55" containerID="a33228a91172479c36e2b830facb599cc42c522f07069eaa1a2c0e581ca09f15" exitCode=0 Jan 30 21:58:19 crc kubenswrapper[4869]: I0130 21:58:19.528134 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576ds5gt" 
event={"ID":"ffc49835-5a2c-434a-984b-10abf3fe7a55","Type":"ContainerDied","Data":"a33228a91172479c36e2b830facb599cc42c522f07069eaa1a2c0e581ca09f15"} Jan 30 21:58:20 crc kubenswrapper[4869]: I0130 21:58:20.535381 4869 generic.go:334] "Generic (PLEG): container finished" podID="ffc49835-5a2c-434a-984b-10abf3fe7a55" containerID="477cfab7e743ae7cffaef8dcac7ca16dd2d26e05d6beadff5bfa60771c8d491c" exitCode=0 Jan 30 21:58:20 crc kubenswrapper[4869]: I0130 21:58:20.535431 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576ds5gt" event={"ID":"ffc49835-5a2c-434a-984b-10abf3fe7a55","Type":"ContainerDied","Data":"477cfab7e743ae7cffaef8dcac7ca16dd2d26e05d6beadff5bfa60771c8d491c"} Jan 30 21:58:21 crc kubenswrapper[4869]: I0130 21:58:21.794401 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576ds5gt" Jan 30 21:58:21 crc kubenswrapper[4869]: I0130 21:58:21.836657 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24pkz\" (UniqueName: \"kubernetes.io/projected/ffc49835-5a2c-434a-984b-10abf3fe7a55-kube-api-access-24pkz\") pod \"ffc49835-5a2c-434a-984b-10abf3fe7a55\" (UID: \"ffc49835-5a2c-434a-984b-10abf3fe7a55\") " Jan 30 21:58:21 crc kubenswrapper[4869]: I0130 21:58:21.836786 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ffc49835-5a2c-434a-984b-10abf3fe7a55-bundle\") pod \"ffc49835-5a2c-434a-984b-10abf3fe7a55\" (UID: \"ffc49835-5a2c-434a-984b-10abf3fe7a55\") " Jan 30 21:58:21 crc kubenswrapper[4869]: I0130 21:58:21.836859 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ffc49835-5a2c-434a-984b-10abf3fe7a55-util\") pod \"ffc49835-5a2c-434a-984b-10abf3fe7a55\" (UID: \"ffc49835-5a2c-434a-984b-10abf3fe7a55\") " Jan 30 21:58:21 crc kubenswrapper[4869]: I0130 21:58:21.838482 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffc49835-5a2c-434a-984b-10abf3fe7a55-bundle" (OuterVolumeSpecName: "bundle") pod "ffc49835-5a2c-434a-984b-10abf3fe7a55" (UID: "ffc49835-5a2c-434a-984b-10abf3fe7a55"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:58:21 crc kubenswrapper[4869]: I0130 21:58:21.841942 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffc49835-5a2c-434a-984b-10abf3fe7a55-kube-api-access-24pkz" (OuterVolumeSpecName: "kube-api-access-24pkz") pod "ffc49835-5a2c-434a-984b-10abf3fe7a55" (UID: "ffc49835-5a2c-434a-984b-10abf3fe7a55"). InnerVolumeSpecName "kube-api-access-24pkz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:58:21 crc kubenswrapper[4869]: I0130 21:58:21.851221 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffc49835-5a2c-434a-984b-10abf3fe7a55-util" (OuterVolumeSpecName: "util") pod "ffc49835-5a2c-434a-984b-10abf3fe7a55" (UID: "ffc49835-5a2c-434a-984b-10abf3fe7a55"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:58:21 crc kubenswrapper[4869]: I0130 21:58:21.944815 4869 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ffc49835-5a2c-434a-984b-10abf3fe7a55-util\") on node \"crc\" DevicePath \"\"" Jan 30 21:58:21 crc kubenswrapper[4869]: I0130 21:58:21.944858 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24pkz\" (UniqueName: \"kubernetes.io/projected/ffc49835-5a2c-434a-984b-10abf3fe7a55-kube-api-access-24pkz\") on node \"crc\" DevicePath \"\"" Jan 30 21:58:21 crc kubenswrapper[4869]: I0130 21:58:21.944872 4869 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ffc49835-5a2c-434a-984b-10abf3fe7a55-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:58:22 crc kubenswrapper[4869]: I0130 21:58:22.548075 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576ds5gt" event={"ID":"ffc49835-5a2c-434a-984b-10abf3fe7a55","Type":"ContainerDied","Data":"cf3ae5c79968890693e8e23d7eb1df92ed253ed51fd88fc86ec5c20bb7760e9e"} Jan 30 21:58:22 crc kubenswrapper[4869]: I0130 21:58:22.548394 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf3ae5c79968890693e8e23d7eb1df92ed253ed51fd88fc86ec5c20bb7760e9e" Jan 30 21:58:22 crc kubenswrapper[4869]: I0130 21:58:22.548138 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576ds5gt" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.568732 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/openstack-galera-0"] Jan 30 21:58:30 crc kubenswrapper[4869]: E0130 21:58:30.569547 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffc49835-5a2c-434a-984b-10abf3fe7a55" containerName="pull" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.569562 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffc49835-5a2c-434a-984b-10abf3fe7a55" containerName="pull" Jan 30 21:58:30 crc kubenswrapper[4869]: E0130 21:58:30.569573 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffc49835-5a2c-434a-984b-10abf3fe7a55" containerName="extract" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.569579 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffc49835-5a2c-434a-984b-10abf3fe7a55" containerName="extract" Jan 30 21:58:30 crc kubenswrapper[4869]: E0130 21:58:30.569600 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffc49835-5a2c-434a-984b-10abf3fe7a55" containerName="util" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.569607 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffc49835-5a2c-434a-984b-10abf3fe7a55" containerName="util" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.569705 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffc49835-5a2c-434a-984b-10abf3fe7a55" containerName="extract" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.570295 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/openstack-galera-0" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.572494 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cinder-kuttl-tests"/"openshift-service-ca.crt" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.572650 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cinder-kuttl-tests"/"kube-root-ca.crt" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.572855 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"galera-openstack-dockercfg-6shmm" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.574433 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cinder-kuttl-tests"/"openstack-config-data" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.581339 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/openstack-galera-2"] Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.582882 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cinder-kuttl-tests"/"openstack-scripts" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.583009 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/openstack-galera-2" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.588588 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/openstack-galera-1"] Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.589975 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/openstack-galera-1" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.592526 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/openstack-galera-0"] Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.598625 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/openstack-galera-2"] Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.612007 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/openstack-galera-1"] Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.650434 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rl4f\" (UniqueName: \"kubernetes.io/projected/3ba1f617-3e3a-4d7c-9374-ed5a271550d3-kube-api-access-6rl4f\") pod \"openstack-galera-0\" (UID: \"3ba1f617-3e3a-4d7c-9374-ed5a271550d3\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.650488 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3ba1f617-3e3a-4d7c-9374-ed5a271550d3-kolla-config\") pod \"openstack-galera-0\" (UID: \"3ba1f617-3e3a-4d7c-9374-ed5a271550d3\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.650522 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ba1f617-3e3a-4d7c-9374-ed5a271550d3-operator-scripts\") pod \"openstack-galera-0\" (UID: \"3ba1f617-3e3a-4d7c-9374-ed5a271550d3\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.650552 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/3ba1f617-3e3a-4d7c-9374-ed5a271550d3-config-data-default\") pod \"openstack-galera-0\" (UID: \"3ba1f617-3e3a-4d7c-9374-ed5a271550d3\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.650577 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1f14ed92-142f-4e45-8be8-d60ab70d051a-config-data-generated\") pod \"openstack-galera-1\" (UID: \"1f14ed92-142f-4e45-8be8-d60ab70d051a\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.650637 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7hbd\" (UniqueName: \"kubernetes.io/projected/1f14ed92-142f-4e45-8be8-d60ab70d051a-kube-api-access-w7hbd\") pod \"openstack-galera-1\" (UID: \"1f14ed92-142f-4e45-8be8-d60ab70d051a\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.650679 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1f14ed92-142f-4e45-8be8-d60ab70d051a-kolla-config\") pod \"openstack-galera-1\" (UID: \"1f14ed92-142f-4e45-8be8-d60ab70d051a\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.650722 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"3ba1f617-3e3a-4d7c-9374-ed5a271550d3\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.650749 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-1\" (UID: \"1f14ed92-142f-4e45-8be8-d60ab70d051a\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.650778 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f14ed92-142f-4e45-8be8-d60ab70d051a-operator-scripts\") pod \"openstack-galera-1\" (UID: \"1f14ed92-142f-4e45-8be8-d60ab70d051a\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.650857 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3ba1f617-3e3a-4d7c-9374-ed5a271550d3-config-data-generated\") pod \"openstack-galera-0\" (UID: \"3ba1f617-3e3a-4d7c-9374-ed5a271550d3\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.650884 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1f14ed92-142f-4e45-8be8-d60ab70d051a-config-data-default\") pod \"openstack-galera-1\" (UID: \"1f14ed92-142f-4e45-8be8-d60ab70d051a\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.751792 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7dd1249b-e125-4034-a274-8cf26b3e9b3a-config-data-generated\") pod \"openstack-galera-2\" (UID: \"7dd1249b-e125-4034-a274-8cf26b3e9b3a\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.752103 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3ba1f617-3e3a-4d7c-9374-ed5a271550d3-config-data-generated\") pod \"openstack-galera-0\" (UID: \"3ba1f617-3e3a-4d7c-9374-ed5a271550d3\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.752185 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1f14ed92-142f-4e45-8be8-d60ab70d051a-config-data-default\") pod \"openstack-galera-1\" (UID: \"1f14ed92-142f-4e45-8be8-d60ab70d051a\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.752252 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7dd1249b-e125-4034-a274-8cf26b3e9b3a-config-data-default\") pod \"openstack-galera-2\" (UID: \"7dd1249b-e125-4034-a274-8cf26b3e9b3a\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.752330 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dd1249b-e125-4034-a274-8cf26b3e9b3a-operator-scripts\") pod \"openstack-galera-2\" (UID: \"7dd1249b-e125-4034-a274-8cf26b3e9b3a\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.752398 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rl4f\" (UniqueName: \"kubernetes.io/projected/3ba1f617-3e3a-4d7c-9374-ed5a271550d3-kube-api-access-6rl4f\") pod \"openstack-galera-0\" (UID: \"3ba1f617-3e3a-4d7c-9374-ed5a271550d3\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.752465 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3ba1f617-3e3a-4d7c-9374-ed5a271550d3-kolla-config\") pod \"openstack-galera-0\" (UID: \"3ba1f617-3e3a-4d7c-9374-ed5a271550d3\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.752532 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ba1f617-3e3a-4d7c-9374-ed5a271550d3-operator-scripts\") pod \"openstack-galera-0\" (UID: \"3ba1f617-3e3a-4d7c-9374-ed5a271550d3\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.752613 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3ba1f617-3e3a-4d7c-9374-ed5a271550d3-config-data-default\") pod \"openstack-galera-0\" (UID: \"3ba1f617-3e3a-4d7c-9374-ed5a271550d3\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.752711 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/1f14ed92-142f-4e45-8be8-d60ab70d051a-config-data-generated\") pod \"openstack-galera-1\" (UID: \"1f14ed92-142f-4e45-8be8-d60ab70d051a\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.752819 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hw7b\" (UniqueName: \"kubernetes.io/projected/7dd1249b-e125-4034-a274-8cf26b3e9b3a-kube-api-access-6hw7b\") pod \"openstack-galera-2\" (UID: \"7dd1249b-e125-4034-a274-8cf26b3e9b3a\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.752920 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-2\" (UID: \"7dd1249b-e125-4034-a274-8cf26b3e9b3a\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.753017 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7hbd\" (UniqueName: \"kubernetes.io/projected/1f14ed92-142f-4e45-8be8-d60ab70d051a-kube-api-access-w7hbd\") pod \"openstack-galera-1\" (UID: \"1f14ed92-142f-4e45-8be8-d60ab70d051a\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.753125 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1f14ed92-142f-4e45-8be8-d60ab70d051a-kolla-config\") pod \"openstack-galera-1\" (UID: \"1f14ed92-142f-4e45-8be8-d60ab70d051a\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.753213 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"3ba1f617-3e3a-4d7c-9374-ed5a271550d3\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.753281 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7dd1249b-e125-4034-a274-8cf26b3e9b3a-kolla-config\") pod \"openstack-galera-2\" (UID: \"7dd1249b-e125-4034-a274-8cf26b3e9b3a\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.753351 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-1\" (UID: \"1f14ed92-142f-4e45-8be8-d60ab70d051a\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.753437 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f14ed92-142f-4e45-8be8-d60ab70d051a-operator-scripts\") pod \"openstack-galera-1\" (UID: \"1f14ed92-142f-4e45-8be8-d60ab70d051a\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.755243 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"3ba1f617-3e3a-4d7c-9374-ed5a271550d3\") device 
mount path \"/mnt/openstack/pv01\"" pod="cinder-kuttl-tests/openstack-galera-0" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.755258 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3ba1f617-3e3a-4d7c-9374-ed5a271550d3-config-data-default\") pod \"openstack-galera-0\" (UID: \"3ba1f617-3e3a-4d7c-9374-ed5a271550d3\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.755261 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-1\" (UID: \"1f14ed92-142f-4e45-8be8-d60ab70d051a\") device mount path \"/mnt/openstack/pv10\"" pod="cinder-kuttl-tests/openstack-galera-1" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.756505 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1f14ed92-142f-4e45-8be8-d60ab70d051a-config-data-generated\") pod \"openstack-galera-1\" (UID: \"1f14ed92-142f-4e45-8be8-d60ab70d051a\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.756575 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f14ed92-142f-4e45-8be8-d60ab70d051a-operator-scripts\") pod \"openstack-galera-1\" (UID: \"1f14ed92-142f-4e45-8be8-d60ab70d051a\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.756468 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3ba1f617-3e3a-4d7c-9374-ed5a271550d3-config-data-generated\") pod \"openstack-galera-0\" (UID: \"3ba1f617-3e3a-4d7c-9374-ed5a271550d3\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.757312 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1f14ed92-142f-4e45-8be8-d60ab70d051a-kolla-config\") pod \"openstack-galera-1\" (UID: \"1f14ed92-142f-4e45-8be8-d60ab70d051a\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.757413 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ba1f617-3e3a-4d7c-9374-ed5a271550d3-operator-scripts\") pod \"openstack-galera-0\" (UID: \"3ba1f617-3e3a-4d7c-9374-ed5a271550d3\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.757441 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1f14ed92-142f-4e45-8be8-d60ab70d051a-config-data-default\") pod \"openstack-galera-1\" (UID: \"1f14ed92-142f-4e45-8be8-d60ab70d051a\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.761868 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3ba1f617-3e3a-4d7c-9374-ed5a271550d3-kolla-config\") pod \"openstack-galera-0\" (UID: \"3ba1f617-3e3a-4d7c-9374-ed5a271550d3\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.777518 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7hbd\" (UniqueName: \"kubernetes.io/projected/1f14ed92-142f-4e45-8be8-d60ab70d051a-kube-api-access-w7hbd\") pod \"openstack-galera-1\" (UID: \"1f14ed92-142f-4e45-8be8-d60ab70d051a\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.778223 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-1\" (UID: \"1f14ed92-142f-4e45-8be8-d60ab70d051a\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.778310 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rl4f\" (UniqueName: \"kubernetes.io/projected/3ba1f617-3e3a-4d7c-9374-ed5a271550d3-kube-api-access-6rl4f\") pod \"openstack-galera-0\" (UID: \"3ba1f617-3e3a-4d7c-9374-ed5a271550d3\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.778452 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"3ba1f617-3e3a-4d7c-9374-ed5a271550d3\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.854170 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7dd1249b-e125-4034-a274-8cf26b3e9b3a-kolla-config\") pod \"openstack-galera-2\" (UID: \"7dd1249b-e125-4034-a274-8cf26b3e9b3a\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.854473 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7dd1249b-e125-4034-a274-8cf26b3e9b3a-config-data-generated\") pod \"openstack-galera-2\" (UID: \"7dd1249b-e125-4034-a274-8cf26b3e9b3a\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.854507 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7dd1249b-e125-4034-a274-8cf26b3e9b3a-config-data-default\") pod \"openstack-galera-2\" (UID: \"7dd1249b-e125-4034-a274-8cf26b3e9b3a\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.854526 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dd1249b-e125-4034-a274-8cf26b3e9b3a-operator-scripts\") pod \"openstack-galera-2\" (UID: \"7dd1249b-e125-4034-a274-8cf26b3e9b3a\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.854554 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hw7b\" (UniqueName: \"kubernetes.io/projected/7dd1249b-e125-4034-a274-8cf26b3e9b3a-kube-api-access-6hw7b\") pod \"openstack-galera-2\" (UID: \"7dd1249b-e125-4034-a274-8cf26b3e9b3a\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.854573 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") 
pod \"openstack-galera-2\" (UID: \"7dd1249b-e125-4034-a274-8cf26b3e9b3a\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.854780 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-2\" (UID: \"7dd1249b-e125-4034-a274-8cf26b3e9b3a\") device mount path \"/mnt/openstack/pv08\"" pod="cinder-kuttl-tests/openstack-galera-2" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.855664 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7dd1249b-e125-4034-a274-8cf26b3e9b3a-kolla-config\") pod \"openstack-galera-2\" (UID: \"7dd1249b-e125-4034-a274-8cf26b3e9b3a\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.857876 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dd1249b-e125-4034-a274-8cf26b3e9b3a-operator-scripts\") pod \"openstack-galera-2\" (UID: \"7dd1249b-e125-4034-a274-8cf26b3e9b3a\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.859201 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7dd1249b-e125-4034-a274-8cf26b3e9b3a-config-data-default\") pod \"openstack-galera-2\" (UID: \"7dd1249b-e125-4034-a274-8cf26b3e9b3a\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.861442 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7dd1249b-e125-4034-a274-8cf26b3e9b3a-config-data-generated\") pod \"openstack-galera-2\" (UID: \"7dd1249b-e125-4034-a274-8cf26b3e9b3a\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.873233 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-2\" (UID: \"7dd1249b-e125-4034-a274-8cf26b3e9b3a\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.874475 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hw7b\" (UniqueName: \"kubernetes.io/projected/7dd1249b-e125-4034-a274-8cf26b3e9b3a-kube-api-access-6hw7b\") pod \"openstack-galera-2\" (UID: \"7dd1249b-e125-4034-a274-8cf26b3e9b3a\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.885708 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/openstack-galera-0" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.903414 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/openstack-galera-2" Jan 30 21:58:30 crc kubenswrapper[4869]: I0130 21:58:30.918254 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/openstack-galera-1" Jan 30 21:58:31 crc kubenswrapper[4869]: I0130 21:58:31.197797 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/openstack-galera-0"] Jan 30 21:58:31 crc kubenswrapper[4869]: I0130 21:58:31.488858 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/openstack-galera-2"] Jan 30 21:58:31 crc kubenswrapper[4869]: W0130 21:58:31.494219 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7dd1249b_e125_4034_a274_8cf26b3e9b3a.slice/crio-69d59b916ad6489a4eff7fff14575be42d8e0a755d2d88dfdb80731f51ec858e WatchSource:0}: Error finding container 69d59b916ad6489a4eff7fff14575be42d8e0a755d2d88dfdb80731f51ec858e: Status 404 returned error can't find the container with id 69d59b916ad6489a4eff7fff14575be42d8e0a755d2d88dfdb80731f51ec858e Jan 30 21:58:31 crc kubenswrapper[4869]: I0130 21:58:31.501828 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/openstack-galera-1"] Jan 30 21:58:31 crc kubenswrapper[4869]: I0130 21:58:31.603445 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-0" event={"ID":"3ba1f617-3e3a-4d7c-9374-ed5a271550d3","Type":"ContainerStarted","Data":"6038188a1d9be5ac66be7ae3803d8e9c27f2e2f9e8e1c08d20802fef9bf66524"} Jan 30 21:58:31 crc kubenswrapper[4869]: I0130 21:58:31.604727 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-2" event={"ID":"7dd1249b-e125-4034-a274-8cf26b3e9b3a","Type":"ContainerStarted","Data":"69d59b916ad6489a4eff7fff14575be42d8e0a755d2d88dfdb80731f51ec858e"} Jan 30 21:58:31 crc kubenswrapper[4869]: I0130 21:58:31.605469 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-1" event={"ID":"1f14ed92-142f-4e45-8be8-d60ab70d051a","Type":"ContainerStarted","Data":"a19457677783f04a2c925af6cbe77260d724badd615532e6d323152c1ce425d8"} Jan 30 21:58:32 crc kubenswrapper[4869]: I0130 21:58:32.587075 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-748fc89b74-xknf6"] Jan 30 21:58:32 crc kubenswrapper[4869]: I0130 21:58:32.588243 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-748fc89b74-xknf6" Jan 30 21:58:32 crc kubenswrapper[4869]: I0130 21:58:32.591271 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-service-cert" Jan 30 21:58:32 crc kubenswrapper[4869]: I0130 21:58:32.591388 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-5tqcz" Jan 30 21:58:32 crc kubenswrapper[4869]: I0130 21:58:32.598634 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-748fc89b74-xknf6"] Jan 30 21:58:32 crc kubenswrapper[4869]: I0130 21:58:32.787024 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbtqp\" (UniqueName: \"kubernetes.io/projected/b5313756-52d9-4e0c-b328-4fb9609b70a9-kube-api-access-fbtqp\") pod \"infra-operator-controller-manager-748fc89b74-xknf6\" (UID: \"b5313756-52d9-4e0c-b328-4fb9609b70a9\") " pod="openstack-operators/infra-operator-controller-manager-748fc89b74-xknf6" Jan 30 21:58:32 crc kubenswrapper[4869]: I0130 21:58:32.787141 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b5313756-52d9-4e0c-b328-4fb9609b70a9-webhook-cert\") pod \"infra-operator-controller-manager-748fc89b74-xknf6\" (UID: \"b5313756-52d9-4e0c-b328-4fb9609b70a9\") " pod="openstack-operators/infra-operator-controller-manager-748fc89b74-xknf6" Jan 30 21:58:32 crc kubenswrapper[4869]: I0130 21:58:32.787198 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b5313756-52d9-4e0c-b328-4fb9609b70a9-apiservice-cert\") pod \"infra-operator-controller-manager-748fc89b74-xknf6\" (UID: \"b5313756-52d9-4e0c-b328-4fb9609b70a9\") " pod="openstack-operators/infra-operator-controller-manager-748fc89b74-xknf6" Jan 30 21:58:32 crc kubenswrapper[4869]: I0130 21:58:32.888372 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbtqp\" (UniqueName: \"kubernetes.io/projected/b5313756-52d9-4e0c-b328-4fb9609b70a9-kube-api-access-fbtqp\") pod \"infra-operator-controller-manager-748fc89b74-xknf6\" (UID: \"b5313756-52d9-4e0c-b328-4fb9609b70a9\") " pod="openstack-operators/infra-operator-controller-manager-748fc89b74-xknf6" Jan 30 21:58:32 crc kubenswrapper[4869]: I0130 21:58:32.888446 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b5313756-52d9-4e0c-b328-4fb9609b70a9-webhook-cert\") pod \"infra-operator-controller-manager-748fc89b74-xknf6\" (UID: \"b5313756-52d9-4e0c-b328-4fb9609b70a9\") " pod="openstack-operators/infra-operator-controller-manager-748fc89b74-xknf6" Jan 30 21:58:32 crc kubenswrapper[4869]: I0130 21:58:32.888490 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b5313756-52d9-4e0c-b328-4fb9609b70a9-apiservice-cert\") pod \"infra-operator-controller-manager-748fc89b74-xknf6\" (UID: \"b5313756-52d9-4e0c-b328-4fb9609b70a9\") " pod="openstack-operators/infra-operator-controller-manager-748fc89b74-xknf6" Jan 30 21:58:32 crc kubenswrapper[4869]: I0130 21:58:32.912166 4869 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b5313756-52d9-4e0c-b328-4fb9609b70a9-apiservice-cert\") pod \"infra-operator-controller-manager-748fc89b74-xknf6\" (UID: \"b5313756-52d9-4e0c-b328-4fb9609b70a9\") " pod="openstack-operators/infra-operator-controller-manager-748fc89b74-xknf6" Jan 30 21:58:32 crc kubenswrapper[4869]: I0130 21:58:32.912407 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b5313756-52d9-4e0c-b328-4fb9609b70a9-webhook-cert\") pod \"infra-operator-controller-manager-748fc89b74-xknf6\" (UID: \"b5313756-52d9-4e0c-b328-4fb9609b70a9\") " pod="openstack-operators/infra-operator-controller-manager-748fc89b74-xknf6" Jan 30 21:58:32 crc kubenswrapper[4869]: I0130 21:58:32.916537 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbtqp\" (UniqueName: \"kubernetes.io/projected/b5313756-52d9-4e0c-b328-4fb9609b70a9-kube-api-access-fbtqp\") pod \"infra-operator-controller-manager-748fc89b74-xknf6\" (UID: \"b5313756-52d9-4e0c-b328-4fb9609b70a9\") " pod="openstack-operators/infra-operator-controller-manager-748fc89b74-xknf6" Jan 30 21:58:32 crc kubenswrapper[4869]: I0130 21:58:32.971241 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-748fc89b74-xknf6" Jan 30 21:58:33 crc kubenswrapper[4869]: I0130 21:58:33.311635 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-748fc89b74-xknf6"] Jan 30 21:58:33 crc kubenswrapper[4869]: W0130 21:58:33.313075 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5313756_52d9_4e0c_b328_4fb9609b70a9.slice/crio-200bbfefef6ab015e76dfdd0b5fb0c70d9dc7545be256cdef0dd62daf456a519 WatchSource:0}: Error finding container 200bbfefef6ab015e76dfdd0b5fb0c70d9dc7545be256cdef0dd62daf456a519: Status 404 returned error can't find the container with id 200bbfefef6ab015e76dfdd0b5fb0c70d9dc7545be256cdef0dd62daf456a519 Jan 30 21:58:33 crc kubenswrapper[4869]: I0130 21:58:33.632459 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-748fc89b74-xknf6" event={"ID":"b5313756-52d9-4e0c-b328-4fb9609b70a9","Type":"ContainerStarted","Data":"200bbfefef6ab015e76dfdd0b5fb0c70d9dc7545be256cdef0dd62daf456a519"} Jan 30 21:58:42 crc kubenswrapper[4869]: I0130 21:58:42.707840 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-0" event={"ID":"3ba1f617-3e3a-4d7c-9374-ed5a271550d3","Type":"ContainerStarted","Data":"d70033fca62370f592e349e86ba9143accfc897a1c8b5e66f6cdb064a04cf039"} Jan 30 21:58:42 crc kubenswrapper[4869]: I0130 21:58:42.710169 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-748fc89b74-xknf6" event={"ID":"b5313756-52d9-4e0c-b328-4fb9609b70a9","Type":"ContainerStarted","Data":"eb3228e80004862e9b62c32d689f2f110876e6f1516e4476deb1ca11b0a5f611"} Jan 30 21:58:42 crc kubenswrapper[4869]: I0130 21:58:42.710667 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-748fc89b74-xknf6" Jan 30 21:58:42 crc kubenswrapper[4869]: I0130 21:58:42.711785 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-2" 
event={"ID":"7dd1249b-e125-4034-a274-8cf26b3e9b3a","Type":"ContainerStarted","Data":"50169bd0b23f35d9a3d9f5783d49c30ef8902b861f0a95e73712f57cb4bf392a"} Jan 30 21:58:42 crc kubenswrapper[4869]: I0130 21:58:42.712819 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-1" event={"ID":"1f14ed92-142f-4e45-8be8-d60ab70d051a","Type":"ContainerStarted","Data":"4ec24aaf084d421fbeda3a5af60f542f56071fd93e4494ef2393cfa41e1d3ebc"} Jan 30 21:58:42 crc kubenswrapper[4869]: I0130 21:58:42.772658 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-748fc89b74-xknf6" podStartSLOduration=1.959031599 podStartE2EDuration="10.772640071s" podCreationTimestamp="2026-01-30 21:58:32 +0000 UTC" firstStartedPulling="2026-01-30 21:58:33.3159625 +0000 UTC m=+914.201720525" lastFinishedPulling="2026-01-30 21:58:42.129570972 +0000 UTC m=+923.015328997" observedRunningTime="2026-01-30 21:58:42.768369227 +0000 UTC m=+923.654127252" watchObservedRunningTime="2026-01-30 21:58:42.772640071 +0000 UTC m=+923.658398096" Jan 30 21:58:46 crc kubenswrapper[4869]: I0130 21:58:46.738697 4869 generic.go:334] "Generic (PLEG): container finished" podID="3ba1f617-3e3a-4d7c-9374-ed5a271550d3" containerID="d70033fca62370f592e349e86ba9143accfc897a1c8b5e66f6cdb064a04cf039" exitCode=0 Jan 30 21:58:46 crc kubenswrapper[4869]: I0130 21:58:46.738782 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-0" event={"ID":"3ba1f617-3e3a-4d7c-9374-ed5a271550d3","Type":"ContainerDied","Data":"d70033fca62370f592e349e86ba9143accfc897a1c8b5e66f6cdb064a04cf039"} Jan 30 21:58:46 crc kubenswrapper[4869]: I0130 21:58:46.743124 4869 generic.go:334] "Generic (PLEG): container finished" podID="7dd1249b-e125-4034-a274-8cf26b3e9b3a" containerID="50169bd0b23f35d9a3d9f5783d49c30ef8902b861f0a95e73712f57cb4bf392a" exitCode=0 Jan 30 21:58:46 crc kubenswrapper[4869]: I0130 21:58:46.743200 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-2" event={"ID":"7dd1249b-e125-4034-a274-8cf26b3e9b3a","Type":"ContainerDied","Data":"50169bd0b23f35d9a3d9f5783d49c30ef8902b861f0a95e73712f57cb4bf392a"} Jan 30 21:58:46 crc kubenswrapper[4869]: I0130 21:58:46.749776 4869 generic.go:334] "Generic (PLEG): container finished" podID="1f14ed92-142f-4e45-8be8-d60ab70d051a" containerID="4ec24aaf084d421fbeda3a5af60f542f56071fd93e4494ef2393cfa41e1d3ebc" exitCode=0 Jan 30 21:58:46 crc kubenswrapper[4869]: I0130 21:58:46.749849 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-1" event={"ID":"1f14ed92-142f-4e45-8be8-d60ab70d051a","Type":"ContainerDied","Data":"4ec24aaf084d421fbeda3a5af60f542f56071fd93e4494ef2393cfa41e1d3ebc"} Jan 30 21:58:47 crc kubenswrapper[4869]: I0130 21:58:47.760277 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-0" event={"ID":"3ba1f617-3e3a-4d7c-9374-ed5a271550d3","Type":"ContainerStarted","Data":"5de45bcb824ff5c4c497841eb42b64600d45ee772a7559447c29f2e805c89361"} Jan 30 21:58:47 crc kubenswrapper[4869]: I0130 21:58:47.763087 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-2" event={"ID":"7dd1249b-e125-4034-a274-8cf26b3e9b3a","Type":"ContainerStarted","Data":"80538292743bdf2763971136905271c50371abe0c0ac9f528cd209927b087ab0"} Jan 30 21:58:47 crc kubenswrapper[4869]: I0130 21:58:47.766038 4869 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="cinder-kuttl-tests/openstack-galera-1" event={"ID":"1f14ed92-142f-4e45-8be8-d60ab70d051a","Type":"ContainerStarted","Data":"80160e830354f455ae426d7b3382598dc23003f5f59d64d9953def91c3ca532f"} Jan 30 21:58:47 crc kubenswrapper[4869]: I0130 21:58:47.792689 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/openstack-galera-0" podStartSLOduration=7.777453785 podStartE2EDuration="18.792665037s" podCreationTimestamp="2026-01-30 21:58:29 +0000 UTC" firstStartedPulling="2026-01-30 21:58:31.205953105 +0000 UTC m=+912.091711130" lastFinishedPulling="2026-01-30 21:58:42.221164357 +0000 UTC m=+923.106922382" observedRunningTime="2026-01-30 21:58:47.785691809 +0000 UTC m=+928.671449864" watchObservedRunningTime="2026-01-30 21:58:47.792665037 +0000 UTC m=+928.678423072" Jan 30 21:58:47 crc kubenswrapper[4869]: I0130 21:58:47.813921 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/openstack-galera-1" podStartSLOduration=8.168948293 podStartE2EDuration="18.813885181s" podCreationTimestamp="2026-01-30 21:58:29 +0000 UTC" firstStartedPulling="2026-01-30 21:58:31.507185829 +0000 UTC m=+912.392943854" lastFinishedPulling="2026-01-30 21:58:42.152122717 +0000 UTC m=+923.037880742" observedRunningTime="2026-01-30 21:58:47.806712276 +0000 UTC m=+928.692470341" watchObservedRunningTime="2026-01-30 21:58:47.813885181 +0000 UTC m=+928.699643196" Jan 30 21:58:47 crc kubenswrapper[4869]: I0130 21:58:47.824697 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/openstack-galera-2" podStartSLOduration=8.130644785 podStartE2EDuration="18.824680309s" podCreationTimestamp="2026-01-30 21:58:29 +0000 UTC" firstStartedPulling="2026-01-30 21:58:31.49665406 +0000 UTC m=+912.382412085" lastFinishedPulling="2026-01-30 21:58:42.190689584 +0000 UTC m=+923.076447609" observedRunningTime="2026-01-30 21:58:47.824275926 +0000 UTC m=+928.710033951" watchObservedRunningTime="2026-01-30 21:58:47.824680309 +0000 UTC m=+928.710438334" Jan 30 21:58:50 crc kubenswrapper[4869]: I0130 21:58:50.886197 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cinder-kuttl-tests/openstack-galera-0" Jan 30 21:58:50 crc kubenswrapper[4869]: I0130 21:58:50.887186 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/openstack-galera-0" Jan 30 21:58:50 crc kubenswrapper[4869]: I0130 21:58:50.905470 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cinder-kuttl-tests/openstack-galera-2" Jan 30 21:58:50 crc kubenswrapper[4869]: I0130 21:58:50.905521 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/openstack-galera-2" Jan 30 21:58:50 crc kubenswrapper[4869]: I0130 21:58:50.918865 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cinder-kuttl-tests/openstack-galera-1" Jan 30 21:58:50 crc kubenswrapper[4869]: I0130 21:58:50.918928 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/openstack-galera-1" Jan 30 21:58:53 crc kubenswrapper[4869]: I0130 21:58:53.006583 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-748fc89b74-xknf6" Jan 30 21:58:53 crc kubenswrapper[4869]: I0130 21:58:53.625302 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="cinder-kuttl-tests/openstack-galera-2" Jan 30 
21:58:53 crc kubenswrapper[4869]: I0130 21:58:53.691103 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cinder-kuttl-tests/openstack-galera-2" Jan 30 21:58:57 crc kubenswrapper[4869]: I0130 21:58:57.418319 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/memcached-0"] Jan 30 21:58:57 crc kubenswrapper[4869]: I0130 21:58:57.419781 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/memcached-0" Jan 30 21:58:57 crc kubenswrapper[4869]: I0130 21:58:57.423794 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cinder-kuttl-tests"/"memcached-config-data" Jan 30 21:58:57 crc kubenswrapper[4869]: I0130 21:58:57.424331 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"memcached-memcached-dockercfg-sqfgz" Jan 30 21:58:57 crc kubenswrapper[4869]: I0130 21:58:57.432259 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/memcached-0"] Jan 30 21:58:57 crc kubenswrapper[4869]: I0130 21:58:57.476646 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7e27b23c-3307-49b1-93be-8188ed81865f-kolla-config\") pod \"memcached-0\" (UID: \"7e27b23c-3307-49b1-93be-8188ed81865f\") " pod="cinder-kuttl-tests/memcached-0" Jan 30 21:58:57 crc kubenswrapper[4869]: I0130 21:58:57.476700 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7e27b23c-3307-49b1-93be-8188ed81865f-config-data\") pod \"memcached-0\" (UID: \"7e27b23c-3307-49b1-93be-8188ed81865f\") " pod="cinder-kuttl-tests/memcached-0" Jan 30 21:58:57 crc kubenswrapper[4869]: I0130 21:58:57.476795 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x4vn\" (UniqueName: \"kubernetes.io/projected/7e27b23c-3307-49b1-93be-8188ed81865f-kube-api-access-8x4vn\") pod \"memcached-0\" (UID: \"7e27b23c-3307-49b1-93be-8188ed81865f\") " pod="cinder-kuttl-tests/memcached-0" Jan 30 21:58:57 crc kubenswrapper[4869]: I0130 21:58:57.577585 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7e27b23c-3307-49b1-93be-8188ed81865f-kolla-config\") pod \"memcached-0\" (UID: \"7e27b23c-3307-49b1-93be-8188ed81865f\") " pod="cinder-kuttl-tests/memcached-0" Jan 30 21:58:57 crc kubenswrapper[4869]: I0130 21:58:57.577644 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7e27b23c-3307-49b1-93be-8188ed81865f-config-data\") pod \"memcached-0\" (UID: \"7e27b23c-3307-49b1-93be-8188ed81865f\") " pod="cinder-kuttl-tests/memcached-0" Jan 30 21:58:57 crc kubenswrapper[4869]: I0130 21:58:57.577690 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x4vn\" (UniqueName: \"kubernetes.io/projected/7e27b23c-3307-49b1-93be-8188ed81865f-kube-api-access-8x4vn\") pod \"memcached-0\" (UID: \"7e27b23c-3307-49b1-93be-8188ed81865f\") " pod="cinder-kuttl-tests/memcached-0" Jan 30 21:58:57 crc kubenswrapper[4869]: I0130 21:58:57.578636 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7e27b23c-3307-49b1-93be-8188ed81865f-kolla-config\") pod \"memcached-0\" (UID: 
\"7e27b23c-3307-49b1-93be-8188ed81865f\") " pod="cinder-kuttl-tests/memcached-0" Jan 30 21:58:57 crc kubenswrapper[4869]: I0130 21:58:57.578920 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7e27b23c-3307-49b1-93be-8188ed81865f-config-data\") pod \"memcached-0\" (UID: \"7e27b23c-3307-49b1-93be-8188ed81865f\") " pod="cinder-kuttl-tests/memcached-0" Jan 30 21:58:57 crc kubenswrapper[4869]: I0130 21:58:57.595645 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x4vn\" (UniqueName: \"kubernetes.io/projected/7e27b23c-3307-49b1-93be-8188ed81865f-kube-api-access-8x4vn\") pod \"memcached-0\" (UID: \"7e27b23c-3307-49b1-93be-8188ed81865f\") " pod="cinder-kuttl-tests/memcached-0" Jan 30 21:58:57 crc kubenswrapper[4869]: I0130 21:58:57.783467 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/memcached-0" Jan 30 21:58:58 crc kubenswrapper[4869]: I0130 21:58:58.278211 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/memcached-0"] Jan 30 21:58:58 crc kubenswrapper[4869]: I0130 21:58:58.838914 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/memcached-0" event={"ID":"7e27b23c-3307-49b1-93be-8188ed81865f","Type":"ContainerStarted","Data":"d40eecb1e528e2056bfb7303d1e5dbb38f81afc3c016767319fdca53b3b51a4c"} Jan 30 21:58:59 crc kubenswrapper[4869]: I0130 21:58:59.676910 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/root-account-create-update-jxqxs"] Jan 30 21:58:59 crc kubenswrapper[4869]: I0130 21:58:59.677628 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/root-account-create-update-jxqxs" Jan 30 21:58:59 crc kubenswrapper[4869]: I0130 21:58:59.686713 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"openstack-mariadb-root-db-secret" Jan 30 21:58:59 crc kubenswrapper[4869]: I0130 21:58:59.694829 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/root-account-create-update-jxqxs"] Jan 30 21:58:59 crc kubenswrapper[4869]: I0130 21:58:59.706428 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm2jj\" (UniqueName: \"kubernetes.io/projected/c1aed461-8512-40e1-a0db-0cdc39789a69-kube-api-access-rm2jj\") pod \"root-account-create-update-jxqxs\" (UID: \"c1aed461-8512-40e1-a0db-0cdc39789a69\") " pod="cinder-kuttl-tests/root-account-create-update-jxqxs" Jan 30 21:58:59 crc kubenswrapper[4869]: I0130 21:58:59.706496 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1aed461-8512-40e1-a0db-0cdc39789a69-operator-scripts\") pod \"root-account-create-update-jxqxs\" (UID: \"c1aed461-8512-40e1-a0db-0cdc39789a69\") " pod="cinder-kuttl-tests/root-account-create-update-jxqxs" Jan 30 21:58:59 crc kubenswrapper[4869]: I0130 21:58:59.807607 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1aed461-8512-40e1-a0db-0cdc39789a69-operator-scripts\") pod \"root-account-create-update-jxqxs\" (UID: \"c1aed461-8512-40e1-a0db-0cdc39789a69\") " pod="cinder-kuttl-tests/root-account-create-update-jxqxs" Jan 30 21:58:59 crc kubenswrapper[4869]: I0130 21:58:59.807715 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rm2jj\" (UniqueName: \"kubernetes.io/projected/c1aed461-8512-40e1-a0db-0cdc39789a69-kube-api-access-rm2jj\") pod \"root-account-create-update-jxqxs\" (UID: \"c1aed461-8512-40e1-a0db-0cdc39789a69\") " pod="cinder-kuttl-tests/root-account-create-update-jxqxs" Jan 30 21:58:59 crc kubenswrapper[4869]: I0130 21:58:59.808377 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1aed461-8512-40e1-a0db-0cdc39789a69-operator-scripts\") pod \"root-account-create-update-jxqxs\" (UID: \"c1aed461-8512-40e1-a0db-0cdc39789a69\") " pod="cinder-kuttl-tests/root-account-create-update-jxqxs" Jan 30 21:58:59 crc kubenswrapper[4869]: I0130 21:58:59.847652 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm2jj\" (UniqueName: \"kubernetes.io/projected/c1aed461-8512-40e1-a0db-0cdc39789a69-kube-api-access-rm2jj\") pod \"root-account-create-update-jxqxs\" (UID: \"c1aed461-8512-40e1-a0db-0cdc39789a69\") " pod="cinder-kuttl-tests/root-account-create-update-jxqxs" Jan 30 21:58:59 crc kubenswrapper[4869]: I0130 21:58:59.995694 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/root-account-create-update-jxqxs" Jan 30 21:59:00 crc kubenswrapper[4869]: I0130 21:59:00.374123 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-z4bx2"] Jan 30 21:59:00 crc kubenswrapper[4869]: I0130 21:59:00.376126 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-z4bx2" Jan 30 21:59:00 crc kubenswrapper[4869]: I0130 21:59:00.378535 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-index-dockercfg-jhxj2" Jan 30 21:59:00 crc kubenswrapper[4869]: I0130 21:59:00.389170 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-z4bx2"] Jan 30 21:59:00 crc kubenswrapper[4869]: I0130 21:59:00.427074 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnxsj\" (UniqueName: \"kubernetes.io/projected/b305097a-8a5c-4971-bb7a-9dea98f1a577-kube-api-access-jnxsj\") pod \"rabbitmq-cluster-operator-index-z4bx2\" (UID: \"b305097a-8a5c-4971-bb7a-9dea98f1a577\") " pod="openstack-operators/rabbitmq-cluster-operator-index-z4bx2" Jan 30 21:59:00 crc kubenswrapper[4869]: W0130 21:59:00.498942 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1aed461_8512_40e1_a0db_0cdc39789a69.slice/crio-1d8143d5c022ab6760e273e78d6af4ec1173de01c46e09ac8ec833fa1d401661 WatchSource:0}: Error finding container 1d8143d5c022ab6760e273e78d6af4ec1173de01c46e09ac8ec833fa1d401661: Status 404 returned error can't find the container with id 1d8143d5c022ab6760e273e78d6af4ec1173de01c46e09ac8ec833fa1d401661 Jan 30 21:59:00 crc kubenswrapper[4869]: I0130 21:59:00.504620 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/root-account-create-update-jxqxs"] Jan 30 21:59:00 crc kubenswrapper[4869]: I0130 21:59:00.528277 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnxsj\" (UniqueName: \"kubernetes.io/projected/b305097a-8a5c-4971-bb7a-9dea98f1a577-kube-api-access-jnxsj\") pod 
\"rabbitmq-cluster-operator-index-z4bx2\" (UID: \"b305097a-8a5c-4971-bb7a-9dea98f1a577\") " pod="openstack-operators/rabbitmq-cluster-operator-index-z4bx2" Jan 30 21:59:00 crc kubenswrapper[4869]: I0130 21:59:00.563152 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnxsj\" (UniqueName: \"kubernetes.io/projected/b305097a-8a5c-4971-bb7a-9dea98f1a577-kube-api-access-jnxsj\") pod \"rabbitmq-cluster-operator-index-z4bx2\" (UID: \"b305097a-8a5c-4971-bb7a-9dea98f1a577\") " pod="openstack-operators/rabbitmq-cluster-operator-index-z4bx2" Jan 30 21:59:00 crc kubenswrapper[4869]: I0130 21:59:00.712728 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-z4bx2" Jan 30 21:59:00 crc kubenswrapper[4869]: I0130 21:59:00.850330 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/root-account-create-update-jxqxs" event={"ID":"c1aed461-8512-40e1-a0db-0cdc39789a69","Type":"ContainerStarted","Data":"eca79613af9ac95a5f7ff5c974f679d37331b818e017ddd0e1942adf923e5e74"} Jan 30 21:59:00 crc kubenswrapper[4869]: I0130 21:59:00.850391 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/root-account-create-update-jxqxs" event={"ID":"c1aed461-8512-40e1-a0db-0cdc39789a69","Type":"ContainerStarted","Data":"1d8143d5c022ab6760e273e78d6af4ec1173de01c46e09ac8ec833fa1d401661"} Jan 30 21:59:00 crc kubenswrapper[4869]: I0130 21:59:00.886383 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/root-account-create-update-jxqxs" podStartSLOduration=1.886353145 podStartE2EDuration="1.886353145s" podCreationTimestamp="2026-01-30 21:58:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:59:00.885547011 +0000 UTC m=+941.771305036" watchObservedRunningTime="2026-01-30 21:59:00.886353145 +0000 UTC m=+941.772111170" Jan 30 21:59:01 crc kubenswrapper[4869]: I0130 21:59:01.209003 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="cinder-kuttl-tests/openstack-galera-2" podUID="7dd1249b-e125-4034-a274-8cf26b3e9b3a" containerName="galera" probeResult="failure" output=< Jan 30 21:59:01 crc kubenswrapper[4869]: wsrep_local_state_comment (Donor/Desynced) differs from Synced Jan 30 21:59:01 crc kubenswrapper[4869]: > Jan 30 21:59:01 crc kubenswrapper[4869]: I0130 21:59:01.236002 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-z4bx2"] Jan 30 21:59:01 crc kubenswrapper[4869]: I0130 21:59:01.858001 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-z4bx2" event={"ID":"b305097a-8a5c-4971-bb7a-9dea98f1a577","Type":"ContainerStarted","Data":"eb24d9082df47bc58e3c1e4c09358499602bc1e8feb09c629a3f43f62a823bec"} Jan 30 21:59:03 crc kubenswrapper[4869]: I0130 21:59:03.872547 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/memcached-0" event={"ID":"7e27b23c-3307-49b1-93be-8188ed81865f","Type":"ContainerStarted","Data":"8018f5a2698d9d5047d7754e3fee6586b0f67a2a8d2166d8b425354078600dc0"} Jan 30 21:59:03 crc kubenswrapper[4869]: I0130 21:59:03.873079 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cinder-kuttl-tests/memcached-0" Jan 30 21:59:03 crc kubenswrapper[4869]: I0130 21:59:03.881370 4869 generic.go:334] "Generic (PLEG): container finished" 
podID="c1aed461-8512-40e1-a0db-0cdc39789a69" containerID="eca79613af9ac95a5f7ff5c974f679d37331b818e017ddd0e1942adf923e5e74" exitCode=0 Jan 30 21:59:03 crc kubenswrapper[4869]: I0130 21:59:03.884717 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/root-account-create-update-jxqxs" event={"ID":"c1aed461-8512-40e1-a0db-0cdc39789a69","Type":"ContainerDied","Data":"eca79613af9ac95a5f7ff5c974f679d37331b818e017ddd0e1942adf923e5e74"} Jan 30 21:59:03 crc kubenswrapper[4869]: I0130 21:59:03.886949 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/memcached-0" podStartSLOduration=2.237498161 podStartE2EDuration="6.886934533s" podCreationTimestamp="2026-01-30 21:58:57 +0000 UTC" firstStartedPulling="2026-01-30 21:58:58.283858154 +0000 UTC m=+939.169616179" lastFinishedPulling="2026-01-30 21:59:02.933294516 +0000 UTC m=+943.819052551" observedRunningTime="2026-01-30 21:59:03.885116655 +0000 UTC m=+944.770874700" watchObservedRunningTime="2026-01-30 21:59:03.886934533 +0000 UTC m=+944.772692568" Jan 30 21:59:04 crc kubenswrapper[4869]: I0130 21:59:04.547593 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-z4bx2"] Jan 30 21:59:05 crc kubenswrapper[4869]: I0130 21:59:05.156842 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-rzlf9"] Jan 30 21:59:05 crc kubenswrapper[4869]: I0130 21:59:05.157814 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-rzlf9" Jan 30 21:59:05 crc kubenswrapper[4869]: I0130 21:59:05.176185 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-rzlf9"] Jan 30 21:59:05 crc kubenswrapper[4869]: I0130 21:59:05.195360 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxtk2\" (UniqueName: \"kubernetes.io/projected/de8dd8df-568e-4e5c-a2e7-5dcbe0c8e92a-kube-api-access-sxtk2\") pod \"rabbitmq-cluster-operator-index-rzlf9\" (UID: \"de8dd8df-568e-4e5c-a2e7-5dcbe0c8e92a\") " pod="openstack-operators/rabbitmq-cluster-operator-index-rzlf9" Jan 30 21:59:05 crc kubenswrapper[4869]: I0130 21:59:05.296484 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxtk2\" (UniqueName: \"kubernetes.io/projected/de8dd8df-568e-4e5c-a2e7-5dcbe0c8e92a-kube-api-access-sxtk2\") pod \"rabbitmq-cluster-operator-index-rzlf9\" (UID: \"de8dd8df-568e-4e5c-a2e7-5dcbe0c8e92a\") " pod="openstack-operators/rabbitmq-cluster-operator-index-rzlf9" Jan 30 21:59:05 crc kubenswrapper[4869]: I0130 21:59:05.321360 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxtk2\" (UniqueName: \"kubernetes.io/projected/de8dd8df-568e-4e5c-a2e7-5dcbe0c8e92a-kube-api-access-sxtk2\") pod \"rabbitmq-cluster-operator-index-rzlf9\" (UID: \"de8dd8df-568e-4e5c-a2e7-5dcbe0c8e92a\") " pod="openstack-operators/rabbitmq-cluster-operator-index-rzlf9" Jan 30 21:59:05 crc kubenswrapper[4869]: I0130 21:59:05.361056 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/root-account-create-update-jxqxs" Jan 30 21:59:05 crc kubenswrapper[4869]: I0130 21:59:05.478996 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-rzlf9" Jan 30 21:59:05 crc kubenswrapper[4869]: I0130 21:59:05.509769 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1aed461-8512-40e1-a0db-0cdc39789a69-operator-scripts\") pod \"c1aed461-8512-40e1-a0db-0cdc39789a69\" (UID: \"c1aed461-8512-40e1-a0db-0cdc39789a69\") " Jan 30 21:59:05 crc kubenswrapper[4869]: I0130 21:59:05.509822 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rm2jj\" (UniqueName: \"kubernetes.io/projected/c1aed461-8512-40e1-a0db-0cdc39789a69-kube-api-access-rm2jj\") pod \"c1aed461-8512-40e1-a0db-0cdc39789a69\" (UID: \"c1aed461-8512-40e1-a0db-0cdc39789a69\") " Jan 30 21:59:05 crc kubenswrapper[4869]: I0130 21:59:05.510138 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1aed461-8512-40e1-a0db-0cdc39789a69-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c1aed461-8512-40e1-a0db-0cdc39789a69" (UID: "c1aed461-8512-40e1-a0db-0cdc39789a69"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:59:05 crc kubenswrapper[4869]: I0130 21:59:05.524163 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1aed461-8512-40e1-a0db-0cdc39789a69-kube-api-access-rm2jj" (OuterVolumeSpecName: "kube-api-access-rm2jj") pod "c1aed461-8512-40e1-a0db-0cdc39789a69" (UID: "c1aed461-8512-40e1-a0db-0cdc39789a69"). InnerVolumeSpecName "kube-api-access-rm2jj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:59:05 crc kubenswrapper[4869]: I0130 21:59:05.610858 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1aed461-8512-40e1-a0db-0cdc39789a69-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:59:05 crc kubenswrapper[4869]: I0130 21:59:05.610911 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rm2jj\" (UniqueName: \"kubernetes.io/projected/c1aed461-8512-40e1-a0db-0cdc39789a69-kube-api-access-rm2jj\") on node \"crc\" DevicePath \"\"" Jan 30 21:59:05 crc kubenswrapper[4869]: I0130 21:59:05.900746 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/root-account-create-update-jxqxs" event={"ID":"c1aed461-8512-40e1-a0db-0cdc39789a69","Type":"ContainerDied","Data":"1d8143d5c022ab6760e273e78d6af4ec1173de01c46e09ac8ec833fa1d401661"} Jan 30 21:59:05 crc kubenswrapper[4869]: I0130 21:59:05.901125 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d8143d5c022ab6760e273e78d6af4ec1173de01c46e09ac8ec833fa1d401661" Jan 30 21:59:05 crc kubenswrapper[4869]: I0130 21:59:05.900860 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/root-account-create-update-jxqxs" Jan 30 21:59:06 crc kubenswrapper[4869]: I0130 21:59:06.909432 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-z4bx2" event={"ID":"b305097a-8a5c-4971-bb7a-9dea98f1a577","Type":"ContainerStarted","Data":"f2dc5d7c8058f18f91a2d6afb383c71c6782e560e884cd120807536fc149722f"} Jan 30 21:59:06 crc kubenswrapper[4869]: I0130 21:59:06.909682 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/rabbitmq-cluster-operator-index-z4bx2" podUID="b305097a-8a5c-4971-bb7a-9dea98f1a577" containerName="registry-server" containerID="cri-o://f2dc5d7c8058f18f91a2d6afb383c71c6782e560e884cd120807536fc149722f" gracePeriod=2 Jan 30 21:59:06 crc kubenswrapper[4869]: I0130 21:59:06.929348 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-index-z4bx2" podStartSLOduration=1.564426089 podStartE2EDuration="6.929330287s" podCreationTimestamp="2026-01-30 21:59:00 +0000 UTC" firstStartedPulling="2026-01-30 21:59:01.251369105 +0000 UTC m=+942.137127120" lastFinishedPulling="2026-01-30 21:59:06.616273293 +0000 UTC m=+947.502031318" observedRunningTime="2026-01-30 21:59:06.924786295 +0000 UTC m=+947.810544310" watchObservedRunningTime="2026-01-30 21:59:06.929330287 +0000 UTC m=+947.815088312" Jan 30 21:59:06 crc kubenswrapper[4869]: I0130 21:59:06.959563 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-rzlf9"] Jan 30 21:59:07 crc kubenswrapper[4869]: I0130 21:59:07.059420 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="cinder-kuttl-tests/openstack-galera-0" Jan 30 21:59:07 crc kubenswrapper[4869]: I0130 21:59:07.157021 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cinder-kuttl-tests/openstack-galera-0" Jan 30 21:59:07 crc kubenswrapper[4869]: I0130 21:59:07.338842 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-z4bx2" Jan 30 21:59:07 crc kubenswrapper[4869]: I0130 21:59:07.439694 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnxsj\" (UniqueName: \"kubernetes.io/projected/b305097a-8a5c-4971-bb7a-9dea98f1a577-kube-api-access-jnxsj\") pod \"b305097a-8a5c-4971-bb7a-9dea98f1a577\" (UID: \"b305097a-8a5c-4971-bb7a-9dea98f1a577\") " Jan 30 21:59:07 crc kubenswrapper[4869]: I0130 21:59:07.444567 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b305097a-8a5c-4971-bb7a-9dea98f1a577-kube-api-access-jnxsj" (OuterVolumeSpecName: "kube-api-access-jnxsj") pod "b305097a-8a5c-4971-bb7a-9dea98f1a577" (UID: "b305097a-8a5c-4971-bb7a-9dea98f1a577"). InnerVolumeSpecName "kube-api-access-jnxsj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:59:07 crc kubenswrapper[4869]: I0130 21:59:07.541074 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnxsj\" (UniqueName: \"kubernetes.io/projected/b305097a-8a5c-4971-bb7a-9dea98f1a577-kube-api-access-jnxsj\") on node \"crc\" DevicePath \"\"" Jan 30 21:59:07 crc kubenswrapper[4869]: I0130 21:59:07.917964 4869 generic.go:334] "Generic (PLEG): container finished" podID="b305097a-8a5c-4971-bb7a-9dea98f1a577" containerID="f2dc5d7c8058f18f91a2d6afb383c71c6782e560e884cd120807536fc149722f" exitCode=0 Jan 30 21:59:07 crc kubenswrapper[4869]: I0130 21:59:07.918038 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-z4bx2" Jan 30 21:59:07 crc kubenswrapper[4869]: I0130 21:59:07.918065 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-z4bx2" event={"ID":"b305097a-8a5c-4971-bb7a-9dea98f1a577","Type":"ContainerDied","Data":"f2dc5d7c8058f18f91a2d6afb383c71c6782e560e884cd120807536fc149722f"} Jan 30 21:59:07 crc kubenswrapper[4869]: I0130 21:59:07.918495 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-z4bx2" event={"ID":"b305097a-8a5c-4971-bb7a-9dea98f1a577","Type":"ContainerDied","Data":"eb24d9082df47bc58e3c1e4c09358499602bc1e8feb09c629a3f43f62a823bec"} Jan 30 21:59:07 crc kubenswrapper[4869]: I0130 21:59:07.918519 4869 scope.go:117] "RemoveContainer" containerID="f2dc5d7c8058f18f91a2d6afb383c71c6782e560e884cd120807536fc149722f" Jan 30 21:59:07 crc kubenswrapper[4869]: I0130 21:59:07.921495 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-rzlf9" event={"ID":"de8dd8df-568e-4e5c-a2e7-5dcbe0c8e92a","Type":"ContainerStarted","Data":"937d11294624b3c3ef6e8838dcb5b80e6352781b7ec1a83ca26d3031998997a6"} Jan 30 21:59:07 crc kubenswrapper[4869]: I0130 21:59:07.921543 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-rzlf9" event={"ID":"de8dd8df-568e-4e5c-a2e7-5dcbe0c8e92a","Type":"ContainerStarted","Data":"6a4808ae097d9d95d0a01fc236639b2f79f44eadbd4cb196a626551c8cd7b4ef"} Jan 30 21:59:07 crc kubenswrapper[4869]: I0130 21:59:07.943582 4869 scope.go:117] "RemoveContainer" containerID="f2dc5d7c8058f18f91a2d6afb383c71c6782e560e884cd120807536fc149722f" Jan 30 21:59:07 crc kubenswrapper[4869]: E0130 21:59:07.944191 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2dc5d7c8058f18f91a2d6afb383c71c6782e560e884cd120807536fc149722f\": container with ID starting with f2dc5d7c8058f18f91a2d6afb383c71c6782e560e884cd120807536fc149722f not found: ID does not exist" containerID="f2dc5d7c8058f18f91a2d6afb383c71c6782e560e884cd120807536fc149722f" Jan 30 21:59:07 crc kubenswrapper[4869]: I0130 21:59:07.944235 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2dc5d7c8058f18f91a2d6afb383c71c6782e560e884cd120807536fc149722f"} err="failed to get container status \"f2dc5d7c8058f18f91a2d6afb383c71c6782e560e884cd120807536fc149722f\": rpc error: code = NotFound desc = could not find container \"f2dc5d7c8058f18f91a2d6afb383c71c6782e560e884cd120807536fc149722f\": container with ID starting with f2dc5d7c8058f18f91a2d6afb383c71c6782e560e884cd120807536fc149722f not found: ID does not exist" Jan 
30 21:59:07 crc kubenswrapper[4869]: I0130 21:59:07.947756 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-index-rzlf9" podStartSLOduration=2.232633486 podStartE2EDuration="2.947739649s" podCreationTimestamp="2026-01-30 21:59:05 +0000 UTC" firstStartedPulling="2026-01-30 21:59:06.984554315 +0000 UTC m=+947.870312340" lastFinishedPulling="2026-01-30 21:59:07.699660458 +0000 UTC m=+948.585418503" observedRunningTime="2026-01-30 21:59:07.947082668 +0000 UTC m=+948.832840693" watchObservedRunningTime="2026-01-30 21:59:07.947739649 +0000 UTC m=+948.833497674" Jan 30 21:59:07 crc kubenswrapper[4869]: I0130 21:59:07.965383 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-z4bx2"] Jan 30 21:59:07 crc kubenswrapper[4869]: I0130 21:59:07.976962 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-z4bx2"] Jan 30 21:59:09 crc kubenswrapper[4869]: I0130 21:59:09.523398 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="cinder-kuttl-tests/openstack-galera-1" Jan 30 21:59:09 crc kubenswrapper[4869]: I0130 21:59:09.609994 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cinder-kuttl-tests/openstack-galera-1" Jan 30 21:59:09 crc kubenswrapper[4869]: I0130 21:59:09.883444 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b305097a-8a5c-4971-bb7a-9dea98f1a577" path="/var/lib/kubelet/pods/b305097a-8a5c-4971-bb7a-9dea98f1a577/volumes" Jan 30 21:59:12 crc kubenswrapper[4869]: I0130 21:59:12.784390 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cinder-kuttl-tests/memcached-0" Jan 30 21:59:15 crc kubenswrapper[4869]: I0130 21:59:15.479312 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/rabbitmq-cluster-operator-index-rzlf9" Jan 30 21:59:15 crc kubenswrapper[4869]: I0130 21:59:15.479622 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/rabbitmq-cluster-operator-index-rzlf9" Jan 30 21:59:15 crc kubenswrapper[4869]: I0130 21:59:15.503256 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/rabbitmq-cluster-operator-index-rzlf9" Jan 30 21:59:15 crc kubenswrapper[4869]: I0130 21:59:15.993424 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/rabbitmq-cluster-operator-index-rzlf9" Jan 30 21:59:18 crc kubenswrapper[4869]: I0130 21:59:18.189684 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sl5nt"] Jan 30 21:59:18 crc kubenswrapper[4869]: E0130 21:59:18.190209 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1aed461-8512-40e1-a0db-0cdc39789a69" containerName="mariadb-account-create-update" Jan 30 21:59:18 crc kubenswrapper[4869]: I0130 21:59:18.190220 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1aed461-8512-40e1-a0db-0cdc39789a69" containerName="mariadb-account-create-update" Jan 30 21:59:18 crc kubenswrapper[4869]: E0130 21:59:18.190233 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b305097a-8a5c-4971-bb7a-9dea98f1a577" containerName="registry-server" Jan 30 21:59:18 crc kubenswrapper[4869]: I0130 21:59:18.190239 4869 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b305097a-8a5c-4971-bb7a-9dea98f1a577" containerName="registry-server" Jan 30 21:59:18 crc kubenswrapper[4869]: I0130 21:59:18.190375 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="b305097a-8a5c-4971-bb7a-9dea98f1a577" containerName="registry-server" Jan 30 21:59:18 crc kubenswrapper[4869]: I0130 21:59:18.190394 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1aed461-8512-40e1-a0db-0cdc39789a69" containerName="mariadb-account-create-update" Jan 30 21:59:18 crc kubenswrapper[4869]: I0130 21:59:18.191343 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sl5nt" Jan 30 21:59:18 crc kubenswrapper[4869]: I0130 21:59:18.194381 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-jpqpq" Jan 30 21:59:18 crc kubenswrapper[4869]: I0130 21:59:18.204128 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sl5nt"] Jan 30 21:59:18 crc kubenswrapper[4869]: I0130 21:59:18.267668 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx4nv\" (UniqueName: \"kubernetes.io/projected/bafac8d6-5853-4f41-a2e0-b24fbd2a533d-kube-api-access-vx4nv\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sl5nt\" (UID: \"bafac8d6-5853-4f41-a2e0-b24fbd2a533d\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sl5nt" Jan 30 21:59:18 crc kubenswrapper[4869]: I0130 21:59:18.267735 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bafac8d6-5853-4f41-a2e0-b24fbd2a533d-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sl5nt\" (UID: \"bafac8d6-5853-4f41-a2e0-b24fbd2a533d\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sl5nt" Jan 30 21:59:18 crc kubenswrapper[4869]: I0130 21:59:18.267783 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bafac8d6-5853-4f41-a2e0-b24fbd2a533d-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sl5nt\" (UID: \"bafac8d6-5853-4f41-a2e0-b24fbd2a533d\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sl5nt" Jan 30 21:59:18 crc kubenswrapper[4869]: I0130 21:59:18.369366 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vx4nv\" (UniqueName: \"kubernetes.io/projected/bafac8d6-5853-4f41-a2e0-b24fbd2a533d-kube-api-access-vx4nv\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sl5nt\" (UID: \"bafac8d6-5853-4f41-a2e0-b24fbd2a533d\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sl5nt" Jan 30 21:59:18 crc kubenswrapper[4869]: I0130 21:59:18.369434 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bafac8d6-5853-4f41-a2e0-b24fbd2a533d-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sl5nt\" (UID: \"bafac8d6-5853-4f41-a2e0-b24fbd2a533d\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sl5nt" Jan 30 21:59:18 crc kubenswrapper[4869]: I0130 21:59:18.369480 
4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bafac8d6-5853-4f41-a2e0-b24fbd2a533d-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sl5nt\" (UID: \"bafac8d6-5853-4f41-a2e0-b24fbd2a533d\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sl5nt" Jan 30 21:59:18 crc kubenswrapper[4869]: I0130 21:59:18.369885 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bafac8d6-5853-4f41-a2e0-b24fbd2a533d-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sl5nt\" (UID: \"bafac8d6-5853-4f41-a2e0-b24fbd2a533d\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sl5nt" Jan 30 21:59:18 crc kubenswrapper[4869]: I0130 21:59:18.370007 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bafac8d6-5853-4f41-a2e0-b24fbd2a533d-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sl5nt\" (UID: \"bafac8d6-5853-4f41-a2e0-b24fbd2a533d\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sl5nt" Jan 30 21:59:18 crc kubenswrapper[4869]: I0130 21:59:18.390081 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx4nv\" (UniqueName: \"kubernetes.io/projected/bafac8d6-5853-4f41-a2e0-b24fbd2a533d-kube-api-access-vx4nv\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sl5nt\" (UID: \"bafac8d6-5853-4f41-a2e0-b24fbd2a533d\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sl5nt" Jan 30 21:59:18 crc kubenswrapper[4869]: I0130 21:59:18.507306 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sl5nt" Jan 30 21:59:18 crc kubenswrapper[4869]: I0130 21:59:18.912663 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sl5nt"] Jan 30 21:59:18 crc kubenswrapper[4869]: I0130 21:59:18.985636 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sl5nt" event={"ID":"bafac8d6-5853-4f41-a2e0-b24fbd2a533d","Type":"ContainerStarted","Data":"c9ccf581ff1af12487f78a961665d74f8902da83517b4be001dbae59686b30d3"} Jan 30 21:59:19 crc kubenswrapper[4869]: I0130 21:59:19.992401 4869 generic.go:334] "Generic (PLEG): container finished" podID="bafac8d6-5853-4f41-a2e0-b24fbd2a533d" containerID="7c0f612d2cb5372b8d43d893cfb72ba389ec13907851a5b3cc6c66677d3dd5d0" exitCode=0 Jan 30 21:59:19 crc kubenswrapper[4869]: I0130 21:59:19.992442 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sl5nt" event={"ID":"bafac8d6-5853-4f41-a2e0-b24fbd2a533d","Type":"ContainerDied","Data":"7c0f612d2cb5372b8d43d893cfb72ba389ec13907851a5b3cc6c66677d3dd5d0"} Jan 30 21:59:21 crc kubenswrapper[4869]: I0130 21:59:20.999545 4869 generic.go:334] "Generic (PLEG): container finished" podID="bafac8d6-5853-4f41-a2e0-b24fbd2a533d" containerID="f44ea8bbb40d5ad79caab596094bb2225e4866c149dbdb45596a79a3c1fe6beb" exitCode=0 Jan 30 21:59:21 crc kubenswrapper[4869]: I0130 21:59:20.999588 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sl5nt" event={"ID":"bafac8d6-5853-4f41-a2e0-b24fbd2a533d","Type":"ContainerDied","Data":"f44ea8bbb40d5ad79caab596094bb2225e4866c149dbdb45596a79a3c1fe6beb"} Jan 30 21:59:22 crc kubenswrapper[4869]: I0130 21:59:22.007244 4869 generic.go:334] "Generic (PLEG): container finished" podID="bafac8d6-5853-4f41-a2e0-b24fbd2a533d" containerID="1e2dd1dc956e93448f8b4b7e4c916967b3c8a5f37a4661a5081e243a03675a04" exitCode=0 Jan 30 21:59:22 crc kubenswrapper[4869]: I0130 21:59:22.007295 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sl5nt" event={"ID":"bafac8d6-5853-4f41-a2e0-b24fbd2a533d","Type":"ContainerDied","Data":"1e2dd1dc956e93448f8b4b7e4c916967b3c8a5f37a4661a5081e243a03675a04"} Jan 30 21:59:23 crc kubenswrapper[4869]: I0130 21:59:23.301685 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sl5nt" Jan 30 21:59:23 crc kubenswrapper[4869]: I0130 21:59:23.433285 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bafac8d6-5853-4f41-a2e0-b24fbd2a533d-util\") pod \"bafac8d6-5853-4f41-a2e0-b24fbd2a533d\" (UID: \"bafac8d6-5853-4f41-a2e0-b24fbd2a533d\") " Jan 30 21:59:23 crc kubenswrapper[4869]: I0130 21:59:23.433376 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bafac8d6-5853-4f41-a2e0-b24fbd2a533d-bundle\") pod \"bafac8d6-5853-4f41-a2e0-b24fbd2a533d\" (UID: \"bafac8d6-5853-4f41-a2e0-b24fbd2a533d\") " Jan 30 21:59:23 crc kubenswrapper[4869]: I0130 21:59:23.433429 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vx4nv\" (UniqueName: \"kubernetes.io/projected/bafac8d6-5853-4f41-a2e0-b24fbd2a533d-kube-api-access-vx4nv\") pod \"bafac8d6-5853-4f41-a2e0-b24fbd2a533d\" (UID: \"bafac8d6-5853-4f41-a2e0-b24fbd2a533d\") " Jan 30 21:59:23 crc kubenswrapper[4869]: I0130 21:59:23.435846 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bafac8d6-5853-4f41-a2e0-b24fbd2a533d-bundle" (OuterVolumeSpecName: "bundle") pod "bafac8d6-5853-4f41-a2e0-b24fbd2a533d" (UID: "bafac8d6-5853-4f41-a2e0-b24fbd2a533d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:59:23 crc kubenswrapper[4869]: I0130 21:59:23.442265 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bafac8d6-5853-4f41-a2e0-b24fbd2a533d-kube-api-access-vx4nv" (OuterVolumeSpecName: "kube-api-access-vx4nv") pod "bafac8d6-5853-4f41-a2e0-b24fbd2a533d" (UID: "bafac8d6-5853-4f41-a2e0-b24fbd2a533d"). InnerVolumeSpecName "kube-api-access-vx4nv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:59:23 crc kubenswrapper[4869]: I0130 21:59:23.450504 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bafac8d6-5853-4f41-a2e0-b24fbd2a533d-util" (OuterVolumeSpecName: "util") pod "bafac8d6-5853-4f41-a2e0-b24fbd2a533d" (UID: "bafac8d6-5853-4f41-a2e0-b24fbd2a533d"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:59:23 crc kubenswrapper[4869]: I0130 21:59:23.534846 4869 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bafac8d6-5853-4f41-a2e0-b24fbd2a533d-util\") on node \"crc\" DevicePath \"\"" Jan 30 21:59:23 crc kubenswrapper[4869]: I0130 21:59:23.534884 4869 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bafac8d6-5853-4f41-a2e0-b24fbd2a533d-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:59:23 crc kubenswrapper[4869]: I0130 21:59:23.534909 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vx4nv\" (UniqueName: \"kubernetes.io/projected/bafac8d6-5853-4f41-a2e0-b24fbd2a533d-kube-api-access-vx4nv\") on node \"crc\" DevicePath \"\"" Jan 30 21:59:23 crc kubenswrapper[4869]: I0130 21:59:23.764613 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-h5cb9"] Jan 30 21:59:23 crc kubenswrapper[4869]: E0130 21:59:23.765017 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bafac8d6-5853-4f41-a2e0-b24fbd2a533d" containerName="extract" Jan 30 21:59:23 crc kubenswrapper[4869]: I0130 21:59:23.765046 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="bafac8d6-5853-4f41-a2e0-b24fbd2a533d" containerName="extract" Jan 30 21:59:23 crc kubenswrapper[4869]: E0130 21:59:23.765084 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bafac8d6-5853-4f41-a2e0-b24fbd2a533d" containerName="util" Jan 30 21:59:23 crc kubenswrapper[4869]: I0130 21:59:23.765098 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="bafac8d6-5853-4f41-a2e0-b24fbd2a533d" containerName="util" Jan 30 21:59:23 crc kubenswrapper[4869]: E0130 21:59:23.765117 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bafac8d6-5853-4f41-a2e0-b24fbd2a533d" containerName="pull" Jan 30 21:59:23 crc kubenswrapper[4869]: I0130 21:59:23.765130 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="bafac8d6-5853-4f41-a2e0-b24fbd2a533d" containerName="pull" Jan 30 21:59:23 crc kubenswrapper[4869]: I0130 21:59:23.765312 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="bafac8d6-5853-4f41-a2e0-b24fbd2a533d" containerName="extract" Jan 30 21:59:23 crc kubenswrapper[4869]: I0130 21:59:23.767322 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h5cb9" Jan 30 21:59:23 crc kubenswrapper[4869]: I0130 21:59:23.774702 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h5cb9"] Jan 30 21:59:23 crc kubenswrapper[4869]: I0130 21:59:23.837858 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d562d112-e57f-4a59-93b7-b93a602d65d1-utilities\") pod \"certified-operators-h5cb9\" (UID: \"d562d112-e57f-4a59-93b7-b93a602d65d1\") " pod="openshift-marketplace/certified-operators-h5cb9" Jan 30 21:59:23 crc kubenswrapper[4869]: I0130 21:59:23.837968 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnsfm\" (UniqueName: \"kubernetes.io/projected/d562d112-e57f-4a59-93b7-b93a602d65d1-kube-api-access-jnsfm\") pod \"certified-operators-h5cb9\" (UID: \"d562d112-e57f-4a59-93b7-b93a602d65d1\") " pod="openshift-marketplace/certified-operators-h5cb9" Jan 30 21:59:23 crc kubenswrapper[4869]: I0130 21:59:23.838003 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d562d112-e57f-4a59-93b7-b93a602d65d1-catalog-content\") pod \"certified-operators-h5cb9\" (UID: \"d562d112-e57f-4a59-93b7-b93a602d65d1\") " pod="openshift-marketplace/certified-operators-h5cb9" Jan 30 21:59:23 crc kubenswrapper[4869]: I0130 21:59:23.939744 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d562d112-e57f-4a59-93b7-b93a602d65d1-utilities\") pod \"certified-operators-h5cb9\" (UID: \"d562d112-e57f-4a59-93b7-b93a602d65d1\") " pod="openshift-marketplace/certified-operators-h5cb9" Jan 30 21:59:23 crc kubenswrapper[4869]: I0130 21:59:23.939838 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnsfm\" (UniqueName: \"kubernetes.io/projected/d562d112-e57f-4a59-93b7-b93a602d65d1-kube-api-access-jnsfm\") pod \"certified-operators-h5cb9\" (UID: \"d562d112-e57f-4a59-93b7-b93a602d65d1\") " pod="openshift-marketplace/certified-operators-h5cb9" Jan 30 21:59:23 crc kubenswrapper[4869]: I0130 21:59:23.939995 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d562d112-e57f-4a59-93b7-b93a602d65d1-catalog-content\") pod \"certified-operators-h5cb9\" (UID: \"d562d112-e57f-4a59-93b7-b93a602d65d1\") " pod="openshift-marketplace/certified-operators-h5cb9" Jan 30 21:59:23 crc kubenswrapper[4869]: I0130 21:59:23.940304 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d562d112-e57f-4a59-93b7-b93a602d65d1-utilities\") pod \"certified-operators-h5cb9\" (UID: \"d562d112-e57f-4a59-93b7-b93a602d65d1\") " pod="openshift-marketplace/certified-operators-h5cb9" Jan 30 21:59:23 crc kubenswrapper[4869]: I0130 21:59:23.940446 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d562d112-e57f-4a59-93b7-b93a602d65d1-catalog-content\") pod \"certified-operators-h5cb9\" (UID: \"d562d112-e57f-4a59-93b7-b93a602d65d1\") " pod="openshift-marketplace/certified-operators-h5cb9" Jan 30 21:59:23 crc kubenswrapper[4869]: I0130 21:59:23.958052 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jnsfm\" (UniqueName: \"kubernetes.io/projected/d562d112-e57f-4a59-93b7-b93a602d65d1-kube-api-access-jnsfm\") pod \"certified-operators-h5cb9\" (UID: \"d562d112-e57f-4a59-93b7-b93a602d65d1\") " pod="openshift-marketplace/certified-operators-h5cb9" Jan 30 21:59:24 crc kubenswrapper[4869]: I0130 21:59:24.020888 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sl5nt" event={"ID":"bafac8d6-5853-4f41-a2e0-b24fbd2a533d","Type":"ContainerDied","Data":"c9ccf581ff1af12487f78a961665d74f8902da83517b4be001dbae59686b30d3"} Jan 30 21:59:24 crc kubenswrapper[4869]: I0130 21:59:24.020968 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9ccf581ff1af12487f78a961665d74f8902da83517b4be001dbae59686b30d3" Jan 30 21:59:24 crc kubenswrapper[4869]: I0130 21:59:24.021554 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sl5nt" Jan 30 21:59:24 crc kubenswrapper[4869]: I0130 21:59:24.094439 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h5cb9" Jan 30 21:59:24 crc kubenswrapper[4869]: I0130 21:59:24.547965 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h5cb9"] Jan 30 21:59:25 crc kubenswrapper[4869]: I0130 21:59:25.029742 4869 generic.go:334] "Generic (PLEG): container finished" podID="d562d112-e57f-4a59-93b7-b93a602d65d1" containerID="4d51a7491b0d53023d35c1ae9e6cda14eb086ca8bb359c83e8bd6f7345f74d3c" exitCode=0 Jan 30 21:59:25 crc kubenswrapper[4869]: I0130 21:59:25.029810 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h5cb9" event={"ID":"d562d112-e57f-4a59-93b7-b93a602d65d1","Type":"ContainerDied","Data":"4d51a7491b0d53023d35c1ae9e6cda14eb086ca8bb359c83e8bd6f7345f74d3c"} Jan 30 21:59:25 crc kubenswrapper[4869]: I0130 21:59:25.030051 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h5cb9" event={"ID":"d562d112-e57f-4a59-93b7-b93a602d65d1","Type":"ContainerStarted","Data":"6a9e037ca8c91ff9d78c3c7ee44973b2e799231bd253b4974c29c25a4560f2c2"} Jan 30 21:59:26 crc kubenswrapper[4869]: I0130 21:59:26.041997 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h5cb9" event={"ID":"d562d112-e57f-4a59-93b7-b93a602d65d1","Type":"ContainerStarted","Data":"abe0eceeec294a79b099489083ba6b7eaec8c124f99c5ef48c00f1dcca667cf1"} Jan 30 21:59:27 crc kubenswrapper[4869]: I0130 21:59:27.053221 4869 generic.go:334] "Generic (PLEG): container finished" podID="d562d112-e57f-4a59-93b7-b93a602d65d1" containerID="abe0eceeec294a79b099489083ba6b7eaec8c124f99c5ef48c00f1dcca667cf1" exitCode=0 Jan 30 21:59:27 crc kubenswrapper[4869]: I0130 21:59:27.053314 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h5cb9" event={"ID":"d562d112-e57f-4a59-93b7-b93a602d65d1","Type":"ContainerDied","Data":"abe0eceeec294a79b099489083ba6b7eaec8c124f99c5ef48c00f1dcca667cf1"} Jan 30 21:59:28 crc kubenswrapper[4869]: I0130 21:59:28.061857 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h5cb9" 
event={"ID":"d562d112-e57f-4a59-93b7-b93a602d65d1","Type":"ContainerStarted","Data":"fb7453937f3965bd477e56f0509c546e9084f70cc257cb9a61cdb0b56f8a8ce8"} Jan 30 21:59:28 crc kubenswrapper[4869]: I0130 21:59:28.101416 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-h5cb9" podStartSLOduration=2.459908633 podStartE2EDuration="5.101398195s" podCreationTimestamp="2026-01-30 21:59:23 +0000 UTC" firstStartedPulling="2026-01-30 21:59:25.032543913 +0000 UTC m=+965.918301938" lastFinishedPulling="2026-01-30 21:59:27.674033475 +0000 UTC m=+968.559791500" observedRunningTime="2026-01-30 21:59:28.099758314 +0000 UTC m=+968.985516349" watchObservedRunningTime="2026-01-30 21:59:28.101398195 +0000 UTC m=+968.987156220" Jan 30 21:59:28 crc kubenswrapper[4869]: I0130 21:59:28.961441 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xlw7j"] Jan 30 21:59:28 crc kubenswrapper[4869]: I0130 21:59:28.962571 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xlw7j" Jan 30 21:59:28 crc kubenswrapper[4869]: I0130 21:59:28.969671 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xlw7j"] Jan 30 21:59:29 crc kubenswrapper[4869]: I0130 21:59:29.121917 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b68b2a67-0222-4d82-bc30-a93ee02d2c7d-catalog-content\") pod \"redhat-marketplace-xlw7j\" (UID: \"b68b2a67-0222-4d82-bc30-a93ee02d2c7d\") " pod="openshift-marketplace/redhat-marketplace-xlw7j" Jan 30 21:59:29 crc kubenswrapper[4869]: I0130 21:59:29.121969 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c54s\" (UniqueName: \"kubernetes.io/projected/b68b2a67-0222-4d82-bc30-a93ee02d2c7d-kube-api-access-6c54s\") pod \"redhat-marketplace-xlw7j\" (UID: \"b68b2a67-0222-4d82-bc30-a93ee02d2c7d\") " pod="openshift-marketplace/redhat-marketplace-xlw7j" Jan 30 21:59:29 crc kubenswrapper[4869]: I0130 21:59:29.122017 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b68b2a67-0222-4d82-bc30-a93ee02d2c7d-utilities\") pod \"redhat-marketplace-xlw7j\" (UID: \"b68b2a67-0222-4d82-bc30-a93ee02d2c7d\") " pod="openshift-marketplace/redhat-marketplace-xlw7j" Jan 30 21:59:29 crc kubenswrapper[4869]: I0130 21:59:29.223680 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b68b2a67-0222-4d82-bc30-a93ee02d2c7d-catalog-content\") pod \"redhat-marketplace-xlw7j\" (UID: \"b68b2a67-0222-4d82-bc30-a93ee02d2c7d\") " pod="openshift-marketplace/redhat-marketplace-xlw7j" Jan 30 21:59:29 crc kubenswrapper[4869]: I0130 21:59:29.223748 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6c54s\" (UniqueName: \"kubernetes.io/projected/b68b2a67-0222-4d82-bc30-a93ee02d2c7d-kube-api-access-6c54s\") pod \"redhat-marketplace-xlw7j\" (UID: \"b68b2a67-0222-4d82-bc30-a93ee02d2c7d\") " pod="openshift-marketplace/redhat-marketplace-xlw7j" Jan 30 21:59:29 crc kubenswrapper[4869]: I0130 21:59:29.223798 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b68b2a67-0222-4d82-bc30-a93ee02d2c7d-utilities\") pod \"redhat-marketplace-xlw7j\" (UID: \"b68b2a67-0222-4d82-bc30-a93ee02d2c7d\") " pod="openshift-marketplace/redhat-marketplace-xlw7j" Jan 30 21:59:29 crc kubenswrapper[4869]: I0130 21:59:29.224270 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b68b2a67-0222-4d82-bc30-a93ee02d2c7d-utilities\") pod \"redhat-marketplace-xlw7j\" (UID: \"b68b2a67-0222-4d82-bc30-a93ee02d2c7d\") " pod="openshift-marketplace/redhat-marketplace-xlw7j" Jan 30 21:59:29 crc kubenswrapper[4869]: I0130 21:59:29.224880 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b68b2a67-0222-4d82-bc30-a93ee02d2c7d-catalog-content\") pod \"redhat-marketplace-xlw7j\" (UID: \"b68b2a67-0222-4d82-bc30-a93ee02d2c7d\") " pod="openshift-marketplace/redhat-marketplace-xlw7j" Jan 30 21:59:29 crc kubenswrapper[4869]: I0130 21:59:29.254073 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c54s\" (UniqueName: \"kubernetes.io/projected/b68b2a67-0222-4d82-bc30-a93ee02d2c7d-kube-api-access-6c54s\") pod \"redhat-marketplace-xlw7j\" (UID: \"b68b2a67-0222-4d82-bc30-a93ee02d2c7d\") " pod="openshift-marketplace/redhat-marketplace-xlw7j" Jan 30 21:59:29 crc kubenswrapper[4869]: I0130 21:59:29.297695 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xlw7j" Jan 30 21:59:29 crc kubenswrapper[4869]: I0130 21:59:29.741595 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xlw7j"] Jan 30 21:59:30 crc kubenswrapper[4869]: I0130 21:59:30.075459 4869 generic.go:334] "Generic (PLEG): container finished" podID="b68b2a67-0222-4d82-bc30-a93ee02d2c7d" containerID="7968e9daf3ab3ea0878f6349332fb28df2bb21673e33690992954c3d50b2dc75" exitCode=0 Jan 30 21:59:30 crc kubenswrapper[4869]: I0130 21:59:30.075508 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xlw7j" event={"ID":"b68b2a67-0222-4d82-bc30-a93ee02d2c7d","Type":"ContainerDied","Data":"7968e9daf3ab3ea0878f6349332fb28df2bb21673e33690992954c3d50b2dc75"} Jan 30 21:59:30 crc kubenswrapper[4869]: I0130 21:59:30.075548 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xlw7j" event={"ID":"b68b2a67-0222-4d82-bc30-a93ee02d2c7d","Type":"ContainerStarted","Data":"37c07e0028881bd8c6bb91d307c50e644100ef7c789f74a0ede352bebfc636cc"} Jan 30 21:59:30 crc kubenswrapper[4869]: I0130 21:59:30.690832 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-26p8t"] Jan 30 21:59:30 crc kubenswrapper[4869]: I0130 21:59:30.692028 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-26p8t" Jan 30 21:59:30 crc kubenswrapper[4869]: I0130 21:59:30.699410 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-dockercfg-2xkkt" Jan 30 21:59:30 crc kubenswrapper[4869]: I0130 21:59:30.714592 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-26p8t"] Jan 30 21:59:30 crc kubenswrapper[4869]: I0130 21:59:30.842542 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj52n\" (UniqueName: \"kubernetes.io/projected/20d7cc6e-b969-4768-978d-534adef89f4f-kube-api-access-zj52n\") pod \"rabbitmq-cluster-operator-779fc9694b-26p8t\" (UID: \"20d7cc6e-b969-4768-978d-534adef89f4f\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-26p8t" Jan 30 21:59:30 crc kubenswrapper[4869]: I0130 21:59:30.944132 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj52n\" (UniqueName: \"kubernetes.io/projected/20d7cc6e-b969-4768-978d-534adef89f4f-kube-api-access-zj52n\") pod \"rabbitmq-cluster-operator-779fc9694b-26p8t\" (UID: \"20d7cc6e-b969-4768-978d-534adef89f4f\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-26p8t" Jan 30 21:59:30 crc kubenswrapper[4869]: I0130 21:59:30.968209 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zj52n\" (UniqueName: \"kubernetes.io/projected/20d7cc6e-b969-4768-978d-534adef89f4f-kube-api-access-zj52n\") pod \"rabbitmq-cluster-operator-779fc9694b-26p8t\" (UID: \"20d7cc6e-b969-4768-978d-534adef89f4f\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-26p8t" Jan 30 21:59:31 crc kubenswrapper[4869]: I0130 21:59:31.017155 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-26p8t" Jan 30 21:59:31 crc kubenswrapper[4869]: I0130 21:59:31.084740 4869 generic.go:334] "Generic (PLEG): container finished" podID="b68b2a67-0222-4d82-bc30-a93ee02d2c7d" containerID="069e86382a9ff3221c58f12ce236c479644d19e93038ec1fc87a81c7b7ea1721" exitCode=0 Jan 30 21:59:31 crc kubenswrapper[4869]: I0130 21:59:31.084788 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xlw7j" event={"ID":"b68b2a67-0222-4d82-bc30-a93ee02d2c7d","Type":"ContainerDied","Data":"069e86382a9ff3221c58f12ce236c479644d19e93038ec1fc87a81c7b7ea1721"} Jan 30 21:59:31 crc kubenswrapper[4869]: I0130 21:59:31.380856 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-26p8t"] Jan 30 21:59:32 crc kubenswrapper[4869]: I0130 21:59:32.092350 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xlw7j" event={"ID":"b68b2a67-0222-4d82-bc30-a93ee02d2c7d","Type":"ContainerStarted","Data":"ac7b116ba6a3e4ae7a489c868adb781159aa63ba1080cdf276be1105494fe860"} Jan 30 21:59:32 crc kubenswrapper[4869]: I0130 21:59:32.093932 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-26p8t" event={"ID":"20d7cc6e-b969-4768-978d-534adef89f4f","Type":"ContainerStarted","Data":"2c4749c6b00b7c28de5f27cccaf697d8fe11759570fc014ceba72eca750527ce"} Jan 30 21:59:33 crc kubenswrapper[4869]: I0130 21:59:33.126280 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xlw7j" podStartSLOduration=3.663393054 podStartE2EDuration="5.126261553s" podCreationTimestamp="2026-01-30 21:59:28 +0000 UTC" firstStartedPulling="2026-01-30 21:59:30.077278507 +0000 UTC m=+970.963036532" lastFinishedPulling="2026-01-30 21:59:31.540147006 +0000 UTC m=+972.425905031" observedRunningTime="2026-01-30 21:59:33.12107764 +0000 UTC m=+974.006835665" watchObservedRunningTime="2026-01-30 21:59:33.126261553 +0000 UTC m=+974.012019578" Jan 30 21:59:34 crc kubenswrapper[4869]: I0130 21:59:34.096229 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-h5cb9" Jan 30 21:59:34 crc kubenswrapper[4869]: I0130 21:59:34.096791 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-h5cb9" Jan 30 21:59:34 crc kubenswrapper[4869]: I0130 21:59:34.146068 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-h5cb9" Jan 30 21:59:35 crc kubenswrapper[4869]: I0130 21:59:35.159045 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-h5cb9" Jan 30 21:59:36 crc kubenswrapper[4869]: I0130 21:59:36.125084 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-26p8t" event={"ID":"20d7cc6e-b969-4768-978d-534adef89f4f","Type":"ContainerStarted","Data":"f58320dfca0855f69000bfd485396cf9d75828010486072e1ed4b7c66ac53c9c"} Jan 30 21:59:36 crc kubenswrapper[4869]: I0130 21:59:36.146551 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-26p8t" podStartSLOduration=2.285785146 podStartE2EDuration="6.146524695s" podCreationTimestamp="2026-01-30 
21:59:30 +0000 UTC" firstStartedPulling="2026-01-30 21:59:31.388780171 +0000 UTC m=+972.274538196" lastFinishedPulling="2026-01-30 21:59:35.24951972 +0000 UTC m=+976.135277745" observedRunningTime="2026-01-30 21:59:36.137938486 +0000 UTC m=+977.023696511" watchObservedRunningTime="2026-01-30 21:59:36.146524695 +0000 UTC m=+977.032282740" Jan 30 21:59:36 crc kubenswrapper[4869]: I0130 21:59:36.750022 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h5cb9"] Jan 30 21:59:37 crc kubenswrapper[4869]: I0130 21:59:37.132047 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-h5cb9" podUID="d562d112-e57f-4a59-93b7-b93a602d65d1" containerName="registry-server" containerID="cri-o://fb7453937f3965bd477e56f0509c546e9084f70cc257cb9a61cdb0b56f8a8ce8" gracePeriod=2 Jan 30 21:59:37 crc kubenswrapper[4869]: I0130 21:59:37.598138 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h5cb9" Jan 30 21:59:37 crc kubenswrapper[4869]: I0130 21:59:37.737437 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnsfm\" (UniqueName: \"kubernetes.io/projected/d562d112-e57f-4a59-93b7-b93a602d65d1-kube-api-access-jnsfm\") pod \"d562d112-e57f-4a59-93b7-b93a602d65d1\" (UID: \"d562d112-e57f-4a59-93b7-b93a602d65d1\") " Jan 30 21:59:37 crc kubenswrapper[4869]: I0130 21:59:37.737517 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d562d112-e57f-4a59-93b7-b93a602d65d1-utilities\") pod \"d562d112-e57f-4a59-93b7-b93a602d65d1\" (UID: \"d562d112-e57f-4a59-93b7-b93a602d65d1\") " Jan 30 21:59:37 crc kubenswrapper[4869]: I0130 21:59:37.737551 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d562d112-e57f-4a59-93b7-b93a602d65d1-catalog-content\") pod \"d562d112-e57f-4a59-93b7-b93a602d65d1\" (UID: \"d562d112-e57f-4a59-93b7-b93a602d65d1\") " Jan 30 21:59:37 crc kubenswrapper[4869]: I0130 21:59:37.738301 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d562d112-e57f-4a59-93b7-b93a602d65d1-utilities" (OuterVolumeSpecName: "utilities") pod "d562d112-e57f-4a59-93b7-b93a602d65d1" (UID: "d562d112-e57f-4a59-93b7-b93a602d65d1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:59:37 crc kubenswrapper[4869]: I0130 21:59:37.746130 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d562d112-e57f-4a59-93b7-b93a602d65d1-kube-api-access-jnsfm" (OuterVolumeSpecName: "kube-api-access-jnsfm") pod "d562d112-e57f-4a59-93b7-b93a602d65d1" (UID: "d562d112-e57f-4a59-93b7-b93a602d65d1"). InnerVolumeSpecName "kube-api-access-jnsfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:59:37 crc kubenswrapper[4869]: I0130 21:59:37.777828 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d562d112-e57f-4a59-93b7-b93a602d65d1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d562d112-e57f-4a59-93b7-b93a602d65d1" (UID: "d562d112-e57f-4a59-93b7-b93a602d65d1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:59:37 crc kubenswrapper[4869]: I0130 21:59:37.838797 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnsfm\" (UniqueName: \"kubernetes.io/projected/d562d112-e57f-4a59-93b7-b93a602d65d1-kube-api-access-jnsfm\") on node \"crc\" DevicePath \"\"" Jan 30 21:59:37 crc kubenswrapper[4869]: I0130 21:59:37.838834 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d562d112-e57f-4a59-93b7-b93a602d65d1-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:59:37 crc kubenswrapper[4869]: I0130 21:59:37.838844 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d562d112-e57f-4a59-93b7-b93a602d65d1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:59:38 crc kubenswrapper[4869]: I0130 21:59:38.140361 4869 generic.go:334] "Generic (PLEG): container finished" podID="d562d112-e57f-4a59-93b7-b93a602d65d1" containerID="fb7453937f3965bd477e56f0509c546e9084f70cc257cb9a61cdb0b56f8a8ce8" exitCode=0 Jan 30 21:59:38 crc kubenswrapper[4869]: I0130 21:59:38.140404 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h5cb9" event={"ID":"d562d112-e57f-4a59-93b7-b93a602d65d1","Type":"ContainerDied","Data":"fb7453937f3965bd477e56f0509c546e9084f70cc257cb9a61cdb0b56f8a8ce8"} Jan 30 21:59:38 crc kubenswrapper[4869]: I0130 21:59:38.140434 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h5cb9" event={"ID":"d562d112-e57f-4a59-93b7-b93a602d65d1","Type":"ContainerDied","Data":"6a9e037ca8c91ff9d78c3c7ee44973b2e799231bd253b4974c29c25a4560f2c2"} Jan 30 21:59:38 crc kubenswrapper[4869]: I0130 21:59:38.140406 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h5cb9" Jan 30 21:59:38 crc kubenswrapper[4869]: I0130 21:59:38.140458 4869 scope.go:117] "RemoveContainer" containerID="fb7453937f3965bd477e56f0509c546e9084f70cc257cb9a61cdb0b56f8a8ce8" Jan 30 21:59:38 crc kubenswrapper[4869]: I0130 21:59:38.156124 4869 scope.go:117] "RemoveContainer" containerID="abe0eceeec294a79b099489083ba6b7eaec8c124f99c5ef48c00f1dcca667cf1" Jan 30 21:59:38 crc kubenswrapper[4869]: I0130 21:59:38.164941 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h5cb9"] Jan 30 21:59:38 crc kubenswrapper[4869]: I0130 21:59:38.168777 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-h5cb9"] Jan 30 21:59:38 crc kubenswrapper[4869]: I0130 21:59:38.176123 4869 scope.go:117] "RemoveContainer" containerID="4d51a7491b0d53023d35c1ae9e6cda14eb086ca8bb359c83e8bd6f7345f74d3c" Jan 30 21:59:38 crc kubenswrapper[4869]: I0130 21:59:38.196085 4869 scope.go:117] "RemoveContainer" containerID="fb7453937f3965bd477e56f0509c546e9084f70cc257cb9a61cdb0b56f8a8ce8" Jan 30 21:59:38 crc kubenswrapper[4869]: E0130 21:59:38.196574 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb7453937f3965bd477e56f0509c546e9084f70cc257cb9a61cdb0b56f8a8ce8\": container with ID starting with fb7453937f3965bd477e56f0509c546e9084f70cc257cb9a61cdb0b56f8a8ce8 not found: ID does not exist" containerID="fb7453937f3965bd477e56f0509c546e9084f70cc257cb9a61cdb0b56f8a8ce8" Jan 30 21:59:38 crc kubenswrapper[4869]: I0130 21:59:38.196602 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb7453937f3965bd477e56f0509c546e9084f70cc257cb9a61cdb0b56f8a8ce8"} err="failed to get container status \"fb7453937f3965bd477e56f0509c546e9084f70cc257cb9a61cdb0b56f8a8ce8\": rpc error: code = NotFound desc = could not find container \"fb7453937f3965bd477e56f0509c546e9084f70cc257cb9a61cdb0b56f8a8ce8\": container with ID starting with fb7453937f3965bd477e56f0509c546e9084f70cc257cb9a61cdb0b56f8a8ce8 not found: ID does not exist" Jan 30 21:59:38 crc kubenswrapper[4869]: I0130 21:59:38.196621 4869 scope.go:117] "RemoveContainer" containerID="abe0eceeec294a79b099489083ba6b7eaec8c124f99c5ef48c00f1dcca667cf1" Jan 30 21:59:38 crc kubenswrapper[4869]: E0130 21:59:38.196928 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abe0eceeec294a79b099489083ba6b7eaec8c124f99c5ef48c00f1dcca667cf1\": container with ID starting with abe0eceeec294a79b099489083ba6b7eaec8c124f99c5ef48c00f1dcca667cf1 not found: ID does not exist" containerID="abe0eceeec294a79b099489083ba6b7eaec8c124f99c5ef48c00f1dcca667cf1" Jan 30 21:59:38 crc kubenswrapper[4869]: I0130 21:59:38.196955 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abe0eceeec294a79b099489083ba6b7eaec8c124f99c5ef48c00f1dcca667cf1"} err="failed to get container status \"abe0eceeec294a79b099489083ba6b7eaec8c124f99c5ef48c00f1dcca667cf1\": rpc error: code = NotFound desc = could not find container \"abe0eceeec294a79b099489083ba6b7eaec8c124f99c5ef48c00f1dcca667cf1\": container with ID starting with abe0eceeec294a79b099489083ba6b7eaec8c124f99c5ef48c00f1dcca667cf1 not found: ID does not exist" Jan 30 21:59:38 crc kubenswrapper[4869]: I0130 21:59:38.196970 4869 scope.go:117] "RemoveContainer" 
containerID="4d51a7491b0d53023d35c1ae9e6cda14eb086ca8bb359c83e8bd6f7345f74d3c" Jan 30 21:59:38 crc kubenswrapper[4869]: E0130 21:59:38.197341 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d51a7491b0d53023d35c1ae9e6cda14eb086ca8bb359c83e8bd6f7345f74d3c\": container with ID starting with 4d51a7491b0d53023d35c1ae9e6cda14eb086ca8bb359c83e8bd6f7345f74d3c not found: ID does not exist" containerID="4d51a7491b0d53023d35c1ae9e6cda14eb086ca8bb359c83e8bd6f7345f74d3c" Jan 30 21:59:38 crc kubenswrapper[4869]: I0130 21:59:38.197388 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d51a7491b0d53023d35c1ae9e6cda14eb086ca8bb359c83e8bd6f7345f74d3c"} err="failed to get container status \"4d51a7491b0d53023d35c1ae9e6cda14eb086ca8bb359c83e8bd6f7345f74d3c\": rpc error: code = NotFound desc = could not find container \"4d51a7491b0d53023d35c1ae9e6cda14eb086ca8bb359c83e8bd6f7345f74d3c\": container with ID starting with 4d51a7491b0d53023d35c1ae9e6cda14eb086ca8bb359c83e8bd6f7345f74d3c not found: ID does not exist" Jan 30 21:59:39 crc kubenswrapper[4869]: I0130 21:59:39.297948 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xlw7j" Jan 30 21:59:39 crc kubenswrapper[4869]: I0130 21:59:39.298485 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xlw7j" Jan 30 21:59:39 crc kubenswrapper[4869]: I0130 21:59:39.348573 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xlw7j" Jan 30 21:59:39 crc kubenswrapper[4869]: I0130 21:59:39.899809 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d562d112-e57f-4a59-93b7-b93a602d65d1" path="/var/lib/kubelet/pods/d562d112-e57f-4a59-93b7-b93a602d65d1/volumes" Jan 30 21:59:40 crc kubenswrapper[4869]: I0130 21:59:40.191118 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xlw7j" Jan 30 21:59:40 crc kubenswrapper[4869]: I0130 21:59:40.313246 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/rabbitmq-server-0"] Jan 30 21:59:40 crc kubenswrapper[4869]: E0130 21:59:40.313580 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d562d112-e57f-4a59-93b7-b93a602d65d1" containerName="extract-content" Jan 30 21:59:40 crc kubenswrapper[4869]: I0130 21:59:40.313594 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d562d112-e57f-4a59-93b7-b93a602d65d1" containerName="extract-content" Jan 30 21:59:40 crc kubenswrapper[4869]: E0130 21:59:40.313616 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d562d112-e57f-4a59-93b7-b93a602d65d1" containerName="registry-server" Jan 30 21:59:40 crc kubenswrapper[4869]: I0130 21:59:40.313624 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d562d112-e57f-4a59-93b7-b93a602d65d1" containerName="registry-server" Jan 30 21:59:40 crc kubenswrapper[4869]: E0130 21:59:40.313632 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d562d112-e57f-4a59-93b7-b93a602d65d1" containerName="extract-utilities" Jan 30 21:59:40 crc kubenswrapper[4869]: I0130 21:59:40.313639 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d562d112-e57f-4a59-93b7-b93a602d65d1" containerName="extract-utilities" Jan 30 21:59:40 crc kubenswrapper[4869]: I0130 
21:59:40.313741 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="d562d112-e57f-4a59-93b7-b93a602d65d1" containerName="registry-server" Jan 30 21:59:40 crc kubenswrapper[4869]: I0130 21:59:40.314342 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 30 21:59:40 crc kubenswrapper[4869]: I0130 21:59:40.316112 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cinder-kuttl-tests"/"rabbitmq-server-conf" Jan 30 21:59:40 crc kubenswrapper[4869]: I0130 21:59:40.316333 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"rabbitmq-default-user" Jan 30 21:59:40 crc kubenswrapper[4869]: I0130 21:59:40.316383 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"rabbitmq-erlang-cookie" Jan 30 21:59:40 crc kubenswrapper[4869]: I0130 21:59:40.316632 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"rabbitmq-server-dockercfg-j2wsn" Jan 30 21:59:40 crc kubenswrapper[4869]: I0130 21:59:40.316752 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cinder-kuttl-tests"/"rabbitmq-plugins-conf" Jan 30 21:59:40 crc kubenswrapper[4869]: I0130 21:59:40.322258 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/rabbitmq-server-0"] Jan 30 21:59:40 crc kubenswrapper[4869]: I0130 21:59:40.473307 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/111e74f4-fd99-4f7d-8057-43794129795f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"111e74f4-fd99-4f7d-8057-43794129795f\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 30 21:59:40 crc kubenswrapper[4869]: I0130 21:59:40.473382 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/111e74f4-fd99-4f7d-8057-43794129795f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"111e74f4-fd99-4f7d-8057-43794129795f\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 30 21:59:40 crc kubenswrapper[4869]: I0130 21:59:40.473451 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0118f339-f278-4491-b96c-705ba304b2b1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0118f339-f278-4491-b96c-705ba304b2b1\") pod \"rabbitmq-server-0\" (UID: \"111e74f4-fd99-4f7d-8057-43794129795f\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 30 21:59:40 crc kubenswrapper[4869]: I0130 21:59:40.473472 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/111e74f4-fd99-4f7d-8057-43794129795f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"111e74f4-fd99-4f7d-8057-43794129795f\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 30 21:59:40 crc kubenswrapper[4869]: I0130 21:59:40.473492 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/111e74f4-fd99-4f7d-8057-43794129795f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"111e74f4-fd99-4f7d-8057-43794129795f\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 30 21:59:40 crc kubenswrapper[4869]: I0130 21:59:40.473511 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/111e74f4-fd99-4f7d-8057-43794129795f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"111e74f4-fd99-4f7d-8057-43794129795f\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 30 21:59:40 crc kubenswrapper[4869]: I0130 21:59:40.473526 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/111e74f4-fd99-4f7d-8057-43794129795f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"111e74f4-fd99-4f7d-8057-43794129795f\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 30 21:59:40 crc kubenswrapper[4869]: I0130 21:59:40.473605 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srwkd\" (UniqueName: \"kubernetes.io/projected/111e74f4-fd99-4f7d-8057-43794129795f-kube-api-access-srwkd\") pod \"rabbitmq-server-0\" (UID: \"111e74f4-fd99-4f7d-8057-43794129795f\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 30 21:59:40 crc kubenswrapper[4869]: I0130 21:59:40.574708 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/111e74f4-fd99-4f7d-8057-43794129795f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"111e74f4-fd99-4f7d-8057-43794129795f\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 30 21:59:40 crc kubenswrapper[4869]: I0130 21:59:40.574773 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/111e74f4-fd99-4f7d-8057-43794129795f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"111e74f4-fd99-4f7d-8057-43794129795f\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 30 21:59:40 crc kubenswrapper[4869]: I0130 21:59:40.574923 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0118f339-f278-4491-b96c-705ba304b2b1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0118f339-f278-4491-b96c-705ba304b2b1\") pod \"rabbitmq-server-0\" (UID: \"111e74f4-fd99-4f7d-8057-43794129795f\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 30 21:59:40 crc kubenswrapper[4869]: I0130 21:59:40.574965 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/111e74f4-fd99-4f7d-8057-43794129795f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"111e74f4-fd99-4f7d-8057-43794129795f\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 30 21:59:40 crc kubenswrapper[4869]: I0130 21:59:40.575003 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/111e74f4-fd99-4f7d-8057-43794129795f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"111e74f4-fd99-4f7d-8057-43794129795f\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 30 21:59:40 crc kubenswrapper[4869]: I0130 21:59:40.575035 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/111e74f4-fd99-4f7d-8057-43794129795f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"111e74f4-fd99-4f7d-8057-43794129795f\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 30 21:59:40 crc kubenswrapper[4869]: I0130 21:59:40.575058 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" 
(UniqueName: \"kubernetes.io/empty-dir/111e74f4-fd99-4f7d-8057-43794129795f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"111e74f4-fd99-4f7d-8057-43794129795f\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 30 21:59:40 crc kubenswrapper[4869]: I0130 21:59:40.575093 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srwkd\" (UniqueName: \"kubernetes.io/projected/111e74f4-fd99-4f7d-8057-43794129795f-kube-api-access-srwkd\") pod \"rabbitmq-server-0\" (UID: \"111e74f4-fd99-4f7d-8057-43794129795f\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 30 21:59:40 crc kubenswrapper[4869]: I0130 21:59:40.575511 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/111e74f4-fd99-4f7d-8057-43794129795f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"111e74f4-fd99-4f7d-8057-43794129795f\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 30 21:59:40 crc kubenswrapper[4869]: I0130 21:59:40.575575 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/111e74f4-fd99-4f7d-8057-43794129795f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"111e74f4-fd99-4f7d-8057-43794129795f\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 30 21:59:40 crc kubenswrapper[4869]: I0130 21:59:40.576368 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/111e74f4-fd99-4f7d-8057-43794129795f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"111e74f4-fd99-4f7d-8057-43794129795f\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 30 21:59:40 crc kubenswrapper[4869]: I0130 21:59:40.585114 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/111e74f4-fd99-4f7d-8057-43794129795f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"111e74f4-fd99-4f7d-8057-43794129795f\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 30 21:59:40 crc kubenswrapper[4869]: I0130 21:59:40.585287 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/111e74f4-fd99-4f7d-8057-43794129795f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"111e74f4-fd99-4f7d-8057-43794129795f\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 30 21:59:40 crc kubenswrapper[4869]: I0130 21:59:40.585289 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/111e74f4-fd99-4f7d-8057-43794129795f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"111e74f4-fd99-4f7d-8057-43794129795f\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 30 21:59:40 crc kubenswrapper[4869]: I0130 21:59:40.585768 4869 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 21:59:40 crc kubenswrapper[4869]: I0130 21:59:40.585795 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0118f339-f278-4491-b96c-705ba304b2b1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0118f339-f278-4491-b96c-705ba304b2b1\") pod \"rabbitmq-server-0\" (UID: \"111e74f4-fd99-4f7d-8057-43794129795f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5046b8e7861477b62b0d398479dc4fa14a1196d6672942f2f486d7bf94041838/globalmount\"" pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 30 21:59:40 crc kubenswrapper[4869]: I0130 21:59:40.594077 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srwkd\" (UniqueName: \"kubernetes.io/projected/111e74f4-fd99-4f7d-8057-43794129795f-kube-api-access-srwkd\") pod \"rabbitmq-server-0\" (UID: \"111e74f4-fd99-4f7d-8057-43794129795f\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 30 21:59:40 crc kubenswrapper[4869]: I0130 21:59:40.624531 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0118f339-f278-4491-b96c-705ba304b2b1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0118f339-f278-4491-b96c-705ba304b2b1\") pod \"rabbitmq-server-0\" (UID: \"111e74f4-fd99-4f7d-8057-43794129795f\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 30 21:59:40 crc kubenswrapper[4869]: I0130 21:59:40.631990 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 30 21:59:41 crc kubenswrapper[4869]: I0130 21:59:41.027818 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/rabbitmq-server-0"] Jan 30 21:59:41 crc kubenswrapper[4869]: I0130 21:59:41.161287 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/rabbitmq-server-0" event={"ID":"111e74f4-fd99-4f7d-8057-43794129795f","Type":"ContainerStarted","Data":"42616fa0bb82784791ebfe85956162cfc78ae3ea906e73a8bbbb5bbf33fb67a5"} Jan 30 21:59:41 crc kubenswrapper[4869]: I0130 21:59:41.558648 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hhzgf"] Jan 30 21:59:41 crc kubenswrapper[4869]: I0130 21:59:41.561176 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hhzgf" Jan 30 21:59:41 crc kubenswrapper[4869]: I0130 21:59:41.577038 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hhzgf"] Jan 30 21:59:41 crc kubenswrapper[4869]: I0130 21:59:41.691374 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5rbr\" (UniqueName: \"kubernetes.io/projected/0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2-kube-api-access-v5rbr\") pod \"community-operators-hhzgf\" (UID: \"0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2\") " pod="openshift-marketplace/community-operators-hhzgf" Jan 30 21:59:41 crc kubenswrapper[4869]: I0130 21:59:41.691495 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2-utilities\") pod \"community-operators-hhzgf\" (UID: \"0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2\") " pod="openshift-marketplace/community-operators-hhzgf" Jan 30 21:59:41 crc kubenswrapper[4869]: I0130 21:59:41.691653 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2-catalog-content\") pod \"community-operators-hhzgf\" (UID: \"0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2\") " pod="openshift-marketplace/community-operators-hhzgf" Jan 30 21:59:41 crc kubenswrapper[4869]: I0130 21:59:41.792461 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2-utilities\") pod \"community-operators-hhzgf\" (UID: \"0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2\") " pod="openshift-marketplace/community-operators-hhzgf" Jan 30 21:59:41 crc kubenswrapper[4869]: I0130 21:59:41.792517 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2-catalog-content\") pod \"community-operators-hhzgf\" (UID: \"0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2\") " pod="openshift-marketplace/community-operators-hhzgf" Jan 30 21:59:41 crc kubenswrapper[4869]: I0130 21:59:41.792564 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5rbr\" (UniqueName: \"kubernetes.io/projected/0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2-kube-api-access-v5rbr\") pod \"community-operators-hhzgf\" (UID: \"0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2\") " pod="openshift-marketplace/community-operators-hhzgf" Jan 30 21:59:41 crc kubenswrapper[4869]: I0130 21:59:41.793129 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2-utilities\") pod \"community-operators-hhzgf\" (UID: \"0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2\") " pod="openshift-marketplace/community-operators-hhzgf" Jan 30 21:59:41 crc kubenswrapper[4869]: I0130 21:59:41.793161 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2-catalog-content\") pod \"community-operators-hhzgf\" (UID: \"0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2\") " pod="openshift-marketplace/community-operators-hhzgf" Jan 30 21:59:41 crc kubenswrapper[4869]: I0130 21:59:41.811237 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-v5rbr\" (UniqueName: \"kubernetes.io/projected/0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2-kube-api-access-v5rbr\") pod \"community-operators-hhzgf\" (UID: \"0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2\") " pod="openshift-marketplace/community-operators-hhzgf" Jan 30 21:59:41 crc kubenswrapper[4869]: I0130 21:59:41.882747 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hhzgf" Jan 30 21:59:42 crc kubenswrapper[4869]: I0130 21:59:42.360587 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hhzgf"] Jan 30 21:59:42 crc kubenswrapper[4869]: W0130 21:59:42.369842 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0572dcb2_4223_4ee6_ab95_f6a5f3cdf7b2.slice/crio-09e4af0dbdb9830acee4bc2d66bfb893dbede6c5e0d1f0d92120966ac9065b9d WatchSource:0}: Error finding container 09e4af0dbdb9830acee4bc2d66bfb893dbede6c5e0d1f0d92120966ac9065b9d: Status 404 returned error can't find the container with id 09e4af0dbdb9830acee4bc2d66bfb893dbede6c5e0d1f0d92120966ac9065b9d Jan 30 21:59:43 crc kubenswrapper[4869]: I0130 21:59:43.179609 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hhzgf" event={"ID":"0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2","Type":"ContainerStarted","Data":"09e4af0dbdb9830acee4bc2d66bfb893dbede6c5e0d1f0d92120966ac9065b9d"} Jan 30 21:59:44 crc kubenswrapper[4869]: I0130 21:59:44.188210 4869 generic.go:334] "Generic (PLEG): container finished" podID="0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2" containerID="00747f47d17aae6ae77cd46ca3620a79adb995417a5ee59384271a0728b10ce4" exitCode=0 Jan 30 21:59:44 crc kubenswrapper[4869]: I0130 21:59:44.188270 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hhzgf" event={"ID":"0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2","Type":"ContainerDied","Data":"00747f47d17aae6ae77cd46ca3620a79adb995417a5ee59384271a0728b10ce4"} Jan 30 21:59:45 crc kubenswrapper[4869]: I0130 21:59:45.750380 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xlw7j"] Jan 30 21:59:45 crc kubenswrapper[4869]: I0130 21:59:45.750655 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xlw7j" podUID="b68b2a67-0222-4d82-bc30-a93ee02d2c7d" containerName="registry-server" containerID="cri-o://ac7b116ba6a3e4ae7a489c868adb781159aa63ba1080cdf276be1105494fe860" gracePeriod=2 Jan 30 21:59:46 crc kubenswrapper[4869]: I0130 21:59:46.202536 4869 generic.go:334] "Generic (PLEG): container finished" podID="b68b2a67-0222-4d82-bc30-a93ee02d2c7d" containerID="ac7b116ba6a3e4ae7a489c868adb781159aa63ba1080cdf276be1105494fe860" exitCode=0 Jan 30 21:59:46 crc kubenswrapper[4869]: I0130 21:59:46.202579 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xlw7j" event={"ID":"b68b2a67-0222-4d82-bc30-a93ee02d2c7d","Type":"ContainerDied","Data":"ac7b116ba6a3e4ae7a489c868adb781159aa63ba1080cdf276be1105494fe860"} Jan 30 21:59:46 crc kubenswrapper[4869]: I0130 21:59:46.364795 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8vwt7"] Jan 30 21:59:46 crc kubenswrapper[4869]: I0130 21:59:46.369373 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8vwt7" Jan 30 21:59:46 crc kubenswrapper[4869]: I0130 21:59:46.375246 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8vwt7"] Jan 30 21:59:46 crc kubenswrapper[4869]: I0130 21:59:46.460308 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df94940f-3505-4516-94c5-19c8a367e1c8-catalog-content\") pod \"redhat-operators-8vwt7\" (UID: \"df94940f-3505-4516-94c5-19c8a367e1c8\") " pod="openshift-marketplace/redhat-operators-8vwt7" Jan 30 21:59:46 crc kubenswrapper[4869]: I0130 21:59:46.460422 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df94940f-3505-4516-94c5-19c8a367e1c8-utilities\") pod \"redhat-operators-8vwt7\" (UID: \"df94940f-3505-4516-94c5-19c8a367e1c8\") " pod="openshift-marketplace/redhat-operators-8vwt7" Jan 30 21:59:46 crc kubenswrapper[4869]: I0130 21:59:46.460636 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmfth\" (UniqueName: \"kubernetes.io/projected/df94940f-3505-4516-94c5-19c8a367e1c8-kube-api-access-dmfth\") pod \"redhat-operators-8vwt7\" (UID: \"df94940f-3505-4516-94c5-19c8a367e1c8\") " pod="openshift-marketplace/redhat-operators-8vwt7" Jan 30 21:59:46 crc kubenswrapper[4869]: I0130 21:59:46.561861 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df94940f-3505-4516-94c5-19c8a367e1c8-utilities\") pod \"redhat-operators-8vwt7\" (UID: \"df94940f-3505-4516-94c5-19c8a367e1c8\") " pod="openshift-marketplace/redhat-operators-8vwt7" Jan 30 21:59:46 crc kubenswrapper[4869]: I0130 21:59:46.561957 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmfth\" (UniqueName: \"kubernetes.io/projected/df94940f-3505-4516-94c5-19c8a367e1c8-kube-api-access-dmfth\") pod \"redhat-operators-8vwt7\" (UID: \"df94940f-3505-4516-94c5-19c8a367e1c8\") " pod="openshift-marketplace/redhat-operators-8vwt7" Jan 30 21:59:46 crc kubenswrapper[4869]: I0130 21:59:46.562014 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df94940f-3505-4516-94c5-19c8a367e1c8-catalog-content\") pod \"redhat-operators-8vwt7\" (UID: \"df94940f-3505-4516-94c5-19c8a367e1c8\") " pod="openshift-marketplace/redhat-operators-8vwt7" Jan 30 21:59:46 crc kubenswrapper[4869]: I0130 21:59:46.562511 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df94940f-3505-4516-94c5-19c8a367e1c8-catalog-content\") pod \"redhat-operators-8vwt7\" (UID: \"df94940f-3505-4516-94c5-19c8a367e1c8\") " pod="openshift-marketplace/redhat-operators-8vwt7" Jan 30 21:59:46 crc kubenswrapper[4869]: I0130 21:59:46.562733 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df94940f-3505-4516-94c5-19c8a367e1c8-utilities\") pod \"redhat-operators-8vwt7\" (UID: \"df94940f-3505-4516-94c5-19c8a367e1c8\") " pod="openshift-marketplace/redhat-operators-8vwt7" Jan 30 21:59:46 crc kubenswrapper[4869]: I0130 21:59:46.587264 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-dmfth\" (UniqueName: \"kubernetes.io/projected/df94940f-3505-4516-94c5-19c8a367e1c8-kube-api-access-dmfth\") pod \"redhat-operators-8vwt7\" (UID: \"df94940f-3505-4516-94c5-19c8a367e1c8\") " pod="openshift-marketplace/redhat-operators-8vwt7" Jan 30 21:59:46 crc kubenswrapper[4869]: I0130 21:59:46.699431 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8vwt7" Jan 30 21:59:47 crc kubenswrapper[4869]: I0130 21:59:47.584846 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xlw7j" Jan 30 21:59:47 crc kubenswrapper[4869]: I0130 21:59:47.679610 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b68b2a67-0222-4d82-bc30-a93ee02d2c7d-utilities\") pod \"b68b2a67-0222-4d82-bc30-a93ee02d2c7d\" (UID: \"b68b2a67-0222-4d82-bc30-a93ee02d2c7d\") " Jan 30 21:59:47 crc kubenswrapper[4869]: I0130 21:59:47.679805 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6c54s\" (UniqueName: \"kubernetes.io/projected/b68b2a67-0222-4d82-bc30-a93ee02d2c7d-kube-api-access-6c54s\") pod \"b68b2a67-0222-4d82-bc30-a93ee02d2c7d\" (UID: \"b68b2a67-0222-4d82-bc30-a93ee02d2c7d\") " Jan 30 21:59:47 crc kubenswrapper[4869]: I0130 21:59:47.679886 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b68b2a67-0222-4d82-bc30-a93ee02d2c7d-catalog-content\") pod \"b68b2a67-0222-4d82-bc30-a93ee02d2c7d\" (UID: \"b68b2a67-0222-4d82-bc30-a93ee02d2c7d\") " Jan 30 21:59:47 crc kubenswrapper[4869]: I0130 21:59:47.681662 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b68b2a67-0222-4d82-bc30-a93ee02d2c7d-utilities" (OuterVolumeSpecName: "utilities") pod "b68b2a67-0222-4d82-bc30-a93ee02d2c7d" (UID: "b68b2a67-0222-4d82-bc30-a93ee02d2c7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:59:47 crc kubenswrapper[4869]: I0130 21:59:47.701110 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b68b2a67-0222-4d82-bc30-a93ee02d2c7d-kube-api-access-6c54s" (OuterVolumeSpecName: "kube-api-access-6c54s") pod "b68b2a67-0222-4d82-bc30-a93ee02d2c7d" (UID: "b68b2a67-0222-4d82-bc30-a93ee02d2c7d"). InnerVolumeSpecName "kube-api-access-6c54s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:59:47 crc kubenswrapper[4869]: I0130 21:59:47.719738 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b68b2a67-0222-4d82-bc30-a93ee02d2c7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b68b2a67-0222-4d82-bc30-a93ee02d2c7d" (UID: "b68b2a67-0222-4d82-bc30-a93ee02d2c7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:59:47 crc kubenswrapper[4869]: I0130 21:59:47.781564 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b68b2a67-0222-4d82-bc30-a93ee02d2c7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:59:47 crc kubenswrapper[4869]: I0130 21:59:47.781612 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6c54s\" (UniqueName: \"kubernetes.io/projected/b68b2a67-0222-4d82-bc30-a93ee02d2c7d-kube-api-access-6c54s\") on node \"crc\" DevicePath \"\"" Jan 30 21:59:47 crc kubenswrapper[4869]: I0130 21:59:47.781626 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b68b2a67-0222-4d82-bc30-a93ee02d2c7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:59:48 crc kubenswrapper[4869]: I0130 21:59:48.240508 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xlw7j" event={"ID":"b68b2a67-0222-4d82-bc30-a93ee02d2c7d","Type":"ContainerDied","Data":"37c07e0028881bd8c6bb91d307c50e644100ef7c789f74a0ede352bebfc636cc"} Jan 30 21:59:48 crc kubenswrapper[4869]: I0130 21:59:48.240588 4869 scope.go:117] "RemoveContainer" containerID="ac7b116ba6a3e4ae7a489c868adb781159aa63ba1080cdf276be1105494fe860" Jan 30 21:59:48 crc kubenswrapper[4869]: I0130 21:59:48.240759 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xlw7j" Jan 30 21:59:48 crc kubenswrapper[4869]: I0130 21:59:48.262748 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xlw7j"] Jan 30 21:59:48 crc kubenswrapper[4869]: I0130 21:59:48.266859 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xlw7j"] Jan 30 21:59:49 crc kubenswrapper[4869]: I0130 21:59:49.021310 4869 scope.go:117] "RemoveContainer" containerID="069e86382a9ff3221c58f12ce236c479644d19e93038ec1fc87a81c7b7ea1721" Jan 30 21:59:49 crc kubenswrapper[4869]: I0130 21:59:49.278817 4869 scope.go:117] "RemoveContainer" containerID="7968e9daf3ab3ea0878f6349332fb28df2bb21673e33690992954c3d50b2dc75" Jan 30 21:59:49 crc kubenswrapper[4869]: I0130 21:59:49.449602 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8vwt7"] Jan 30 21:59:49 crc kubenswrapper[4869]: W0130 21:59:49.457422 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf94940f_3505_4516_94c5_19c8a367e1c8.slice/crio-597221ab3183ee0958cd2ee0f61fb340ed0d1f30609d8a21bc1cc27aaee2b7fa WatchSource:0}: Error finding container 597221ab3183ee0958cd2ee0f61fb340ed0d1f30609d8a21bc1cc27aaee2b7fa: Status 404 returned error can't find the container with id 597221ab3183ee0958cd2ee0f61fb340ed0d1f30609d8a21bc1cc27aaee2b7fa Jan 30 21:59:49 crc kubenswrapper[4869]: I0130 21:59:49.561528 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-index-9pzmz"] Jan 30 21:59:49 crc kubenswrapper[4869]: E0130 21:59:49.562125 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b68b2a67-0222-4d82-bc30-a93ee02d2c7d" containerName="registry-server" Jan 30 21:59:49 crc kubenswrapper[4869]: I0130 21:59:49.562145 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b68b2a67-0222-4d82-bc30-a93ee02d2c7d" containerName="registry-server" Jan 30 
21:59:49 crc kubenswrapper[4869]: E0130 21:59:49.562156 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b68b2a67-0222-4d82-bc30-a93ee02d2c7d" containerName="extract-utilities" Jan 30 21:59:49 crc kubenswrapper[4869]: I0130 21:59:49.562162 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b68b2a67-0222-4d82-bc30-a93ee02d2c7d" containerName="extract-utilities" Jan 30 21:59:49 crc kubenswrapper[4869]: E0130 21:59:49.562187 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b68b2a67-0222-4d82-bc30-a93ee02d2c7d" containerName="extract-content" Jan 30 21:59:49 crc kubenswrapper[4869]: I0130 21:59:49.562194 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b68b2a67-0222-4d82-bc30-a93ee02d2c7d" containerName="extract-content" Jan 30 21:59:49 crc kubenswrapper[4869]: I0130 21:59:49.562290 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="b68b2a67-0222-4d82-bc30-a93ee02d2c7d" containerName="registry-server" Jan 30 21:59:49 crc kubenswrapper[4869]: I0130 21:59:49.562986 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-index-9pzmz" Jan 30 21:59:49 crc kubenswrapper[4869]: I0130 21:59:49.566330 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-index-dockercfg-wn7bv" Jan 30 21:59:49 crc kubenswrapper[4869]: I0130 21:59:49.576783 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-index-9pzmz"] Jan 30 21:59:49 crc kubenswrapper[4869]: I0130 21:59:49.608078 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9bkv\" (UniqueName: \"kubernetes.io/projected/29afe53d-3124-4365-ab6a-abe5b7630a4b-kube-api-access-c9bkv\") pod \"keystone-operator-index-9pzmz\" (UID: \"29afe53d-3124-4365-ab6a-abe5b7630a4b\") " pod="openstack-operators/keystone-operator-index-9pzmz" Jan 30 21:59:49 crc kubenswrapper[4869]: I0130 21:59:49.709061 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9bkv\" (UniqueName: \"kubernetes.io/projected/29afe53d-3124-4365-ab6a-abe5b7630a4b-kube-api-access-c9bkv\") pod \"keystone-operator-index-9pzmz\" (UID: \"29afe53d-3124-4365-ab6a-abe5b7630a4b\") " pod="openstack-operators/keystone-operator-index-9pzmz" Jan 30 21:59:49 crc kubenswrapper[4869]: I0130 21:59:49.726853 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9bkv\" (UniqueName: \"kubernetes.io/projected/29afe53d-3124-4365-ab6a-abe5b7630a4b-kube-api-access-c9bkv\") pod \"keystone-operator-index-9pzmz\" (UID: \"29afe53d-3124-4365-ab6a-abe5b7630a4b\") " pod="openstack-operators/keystone-operator-index-9pzmz" Jan 30 21:59:49 crc kubenswrapper[4869]: I0130 21:59:49.887509 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b68b2a67-0222-4d82-bc30-a93ee02d2c7d" path="/var/lib/kubelet/pods/b68b2a67-0222-4d82-bc30-a93ee02d2c7d/volumes" Jan 30 21:59:49 crc kubenswrapper[4869]: I0130 21:59:49.888644 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-index-9pzmz" Jan 30 21:59:50 crc kubenswrapper[4869]: I0130 21:59:50.261251 4869 generic.go:334] "Generic (PLEG): container finished" podID="0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2" containerID="3e684bd14bff2a8ee272efa8825bba1486da0af77d790af41a07e47db68a84f7" exitCode=0 Jan 30 21:59:50 crc kubenswrapper[4869]: I0130 21:59:50.261313 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hhzgf" event={"ID":"0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2","Type":"ContainerDied","Data":"3e684bd14bff2a8ee272efa8825bba1486da0af77d790af41a07e47db68a84f7"} Jan 30 21:59:50 crc kubenswrapper[4869]: I0130 21:59:50.265827 4869 generic.go:334] "Generic (PLEG): container finished" podID="df94940f-3505-4516-94c5-19c8a367e1c8" containerID="8a3bb5a3cbbdd3731643b7efdb3b05e9ec08155fbaf5548b43dd5e4e277972cf" exitCode=0 Jan 30 21:59:50 crc kubenswrapper[4869]: I0130 21:59:50.265872 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vwt7" event={"ID":"df94940f-3505-4516-94c5-19c8a367e1c8","Type":"ContainerDied","Data":"8a3bb5a3cbbdd3731643b7efdb3b05e9ec08155fbaf5548b43dd5e4e277972cf"} Jan 30 21:59:50 crc kubenswrapper[4869]: I0130 21:59:50.265974 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vwt7" event={"ID":"df94940f-3505-4516-94c5-19c8a367e1c8","Type":"ContainerStarted","Data":"597221ab3183ee0958cd2ee0f61fb340ed0d1f30609d8a21bc1cc27aaee2b7fa"} Jan 30 21:59:50 crc kubenswrapper[4869]: I0130 21:59:50.336771 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-index-9pzmz"] Jan 30 21:59:51 crc kubenswrapper[4869]: I0130 21:59:51.272189 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/rabbitmq-server-0" event={"ID":"111e74f4-fd99-4f7d-8057-43794129795f","Type":"ContainerStarted","Data":"cd4655fc40703668ded911e78b28ac16c883c993e9883ba3dc852f674cd2d7f3"} Jan 30 21:59:51 crc kubenswrapper[4869]: I0130 21:59:51.273795 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vwt7" event={"ID":"df94940f-3505-4516-94c5-19c8a367e1c8","Type":"ContainerStarted","Data":"d1a9adefb762188220c6ad838749dd313875d3f7f8f3224b991535c5348c64a7"} Jan 30 21:59:51 crc kubenswrapper[4869]: I0130 21:59:51.274972 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-9pzmz" event={"ID":"29afe53d-3124-4365-ab6a-abe5b7630a4b","Type":"ContainerStarted","Data":"1ada8ac0847e2bcfafedf0759284be1ef10471bb8261da088ea27dfa0ebf0ce4"} Jan 30 21:59:51 crc kubenswrapper[4869]: I0130 21:59:51.277500 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hhzgf" event={"ID":"0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2","Type":"ContainerStarted","Data":"fc486649e3343523282c305545ba7d4c0d86fd84865eeedeed2597584670de1d"} Jan 30 21:59:51 crc kubenswrapper[4869]: I0130 21:59:51.327703 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hhzgf" podStartSLOduration=3.7873752720000002 podStartE2EDuration="10.327684958s" podCreationTimestamp="2026-01-30 21:59:41 +0000 UTC" firstStartedPulling="2026-01-30 21:59:44.190760662 +0000 UTC m=+985.076518677" lastFinishedPulling="2026-01-30 21:59:50.731070338 +0000 UTC m=+991.616828363" observedRunningTime="2026-01-30 21:59:51.323575558 +0000 UTC 
m=+992.209333593" watchObservedRunningTime="2026-01-30 21:59:51.327684958 +0000 UTC m=+992.213442983" Jan 30 21:59:51 crc kubenswrapper[4869]: I0130 21:59:51.884948 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hhzgf" Jan 30 21:59:51 crc kubenswrapper[4869]: I0130 21:59:51.884989 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hhzgf" Jan 30 21:59:52 crc kubenswrapper[4869]: I0130 21:59:52.285460 4869 generic.go:334] "Generic (PLEG): container finished" podID="df94940f-3505-4516-94c5-19c8a367e1c8" containerID="d1a9adefb762188220c6ad838749dd313875d3f7f8f3224b991535c5348c64a7" exitCode=0 Jan 30 21:59:52 crc kubenswrapper[4869]: I0130 21:59:52.285580 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vwt7" event={"ID":"df94940f-3505-4516-94c5-19c8a367e1c8","Type":"ContainerDied","Data":"d1a9adefb762188220c6ad838749dd313875d3f7f8f3224b991535c5348c64a7"} Jan 30 21:59:52 crc kubenswrapper[4869]: I0130 21:59:52.287917 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-9pzmz" event={"ID":"29afe53d-3124-4365-ab6a-abe5b7630a4b","Type":"ContainerStarted","Data":"991abdb178acfbdc0da2d80bb25d10561ca93133022efa522b61f9e61b654835"} Jan 30 21:59:52 crc kubenswrapper[4869]: I0130 21:59:52.356657 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-index-9pzmz" podStartSLOduration=2.426643792 podStartE2EDuration="3.356626247s" podCreationTimestamp="2026-01-30 21:59:49 +0000 UTC" firstStartedPulling="2026-01-30 21:59:50.376019021 +0000 UTC m=+991.261777046" lastFinishedPulling="2026-01-30 21:59:51.306001476 +0000 UTC m=+992.191759501" observedRunningTime="2026-01-30 21:59:52.352431505 +0000 UTC m=+993.238189540" watchObservedRunningTime="2026-01-30 21:59:52.356626247 +0000 UTC m=+993.242384262" Jan 30 21:59:52 crc kubenswrapper[4869]: I0130 21:59:52.936452 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-hhzgf" podUID="0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2" containerName="registry-server" probeResult="failure" output=< Jan 30 21:59:52 crc kubenswrapper[4869]: timeout: failed to connect service ":50051" within 1s Jan 30 21:59:52 crc kubenswrapper[4869]: > Jan 30 21:59:53 crc kubenswrapper[4869]: I0130 21:59:53.305819 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vwt7" event={"ID":"df94940f-3505-4516-94c5-19c8a367e1c8","Type":"ContainerStarted","Data":"48c0ebd10a9344ff774bf24a8af6e313e69b70c8339d269db2b929d6d1c3c5cc"} Jan 30 21:59:53 crc kubenswrapper[4869]: I0130 21:59:53.341195 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8vwt7" podStartSLOduration=4.946130098 podStartE2EDuration="7.341159998s" podCreationTimestamp="2026-01-30 21:59:46 +0000 UTC" firstStartedPulling="2026-01-30 21:59:50.267470185 +0000 UTC m=+991.153228210" lastFinishedPulling="2026-01-30 21:59:52.662500085 +0000 UTC m=+993.548258110" observedRunningTime="2026-01-30 21:59:53.330527493 +0000 UTC m=+994.216285518" watchObservedRunningTime="2026-01-30 21:59:53.341159998 +0000 UTC m=+994.226918023" Jan 30 21:59:56 crc kubenswrapper[4869]: I0130 21:59:56.700153 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-8vwt7" Jan 30 21:59:56 crc kubenswrapper[4869]: I0130 21:59:56.700492 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8vwt7" Jan 30 21:59:57 crc kubenswrapper[4869]: I0130 21:59:57.743411 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8vwt7" podUID="df94940f-3505-4516-94c5-19c8a367e1c8" containerName="registry-server" probeResult="failure" output=< Jan 30 21:59:57 crc kubenswrapper[4869]: timeout: failed to connect service ":50051" within 1s Jan 30 21:59:57 crc kubenswrapper[4869]: > Jan 30 21:59:59 crc kubenswrapper[4869]: I0130 21:59:59.888987 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-index-9pzmz" Jan 30 21:59:59 crc kubenswrapper[4869]: I0130 21:59:59.889038 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/keystone-operator-index-9pzmz" Jan 30 21:59:59 crc kubenswrapper[4869]: I0130 21:59:59.916227 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/keystone-operator-index-9pzmz" Jan 30 22:00:00 crc kubenswrapper[4869]: I0130 22:00:00.146183 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496840-x7hw8"] Jan 30 22:00:00 crc kubenswrapper[4869]: I0130 22:00:00.147239 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-x7hw8" Jan 30 22:00:00 crc kubenswrapper[4869]: I0130 22:00:00.153279 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 22:00:00 crc kubenswrapper[4869]: I0130 22:00:00.153568 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 22:00:00 crc kubenswrapper[4869]: I0130 22:00:00.163034 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496840-x7hw8"] Jan 30 22:00:00 crc kubenswrapper[4869]: I0130 22:00:00.252379 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2z47\" (UniqueName: \"kubernetes.io/projected/39690947-2202-4f51-95ef-c462f923efe7-kube-api-access-t2z47\") pod \"collect-profiles-29496840-x7hw8\" (UID: \"39690947-2202-4f51-95ef-c462f923efe7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-x7hw8" Jan 30 22:00:00 crc kubenswrapper[4869]: I0130 22:00:00.252490 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/39690947-2202-4f51-95ef-c462f923efe7-secret-volume\") pod \"collect-profiles-29496840-x7hw8\" (UID: \"39690947-2202-4f51-95ef-c462f923efe7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-x7hw8" Jan 30 22:00:00 crc kubenswrapper[4869]: I0130 22:00:00.252531 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39690947-2202-4f51-95ef-c462f923efe7-config-volume\") pod \"collect-profiles-29496840-x7hw8\" (UID: \"39690947-2202-4f51-95ef-c462f923efe7\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-x7hw8" Jan 30 22:00:00 crc kubenswrapper[4869]: I0130 22:00:00.353628 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2z47\" (UniqueName: \"kubernetes.io/projected/39690947-2202-4f51-95ef-c462f923efe7-kube-api-access-t2z47\") pod \"collect-profiles-29496840-x7hw8\" (UID: \"39690947-2202-4f51-95ef-c462f923efe7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-x7hw8" Jan 30 22:00:00 crc kubenswrapper[4869]: I0130 22:00:00.353688 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/39690947-2202-4f51-95ef-c462f923efe7-secret-volume\") pod \"collect-profiles-29496840-x7hw8\" (UID: \"39690947-2202-4f51-95ef-c462f923efe7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-x7hw8" Jan 30 22:00:00 crc kubenswrapper[4869]: I0130 22:00:00.353728 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39690947-2202-4f51-95ef-c462f923efe7-config-volume\") pod \"collect-profiles-29496840-x7hw8\" (UID: \"39690947-2202-4f51-95ef-c462f923efe7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-x7hw8" Jan 30 22:00:00 crc kubenswrapper[4869]: I0130 22:00:00.359430 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39690947-2202-4f51-95ef-c462f923efe7-config-volume\") pod \"collect-profiles-29496840-x7hw8\" (UID: \"39690947-2202-4f51-95ef-c462f923efe7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-x7hw8" Jan 30 22:00:00 crc kubenswrapper[4869]: I0130 22:00:00.380142 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2z47\" (UniqueName: \"kubernetes.io/projected/39690947-2202-4f51-95ef-c462f923efe7-kube-api-access-t2z47\") pod \"collect-profiles-29496840-x7hw8\" (UID: \"39690947-2202-4f51-95ef-c462f923efe7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-x7hw8" Jan 30 22:00:00 crc kubenswrapper[4869]: I0130 22:00:00.384632 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-index-9pzmz" Jan 30 22:00:00 crc kubenswrapper[4869]: I0130 22:00:00.385419 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/39690947-2202-4f51-95ef-c462f923efe7-secret-volume\") pod \"collect-profiles-29496840-x7hw8\" (UID: \"39690947-2202-4f51-95ef-c462f923efe7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-x7hw8" Jan 30 22:00:00 crc kubenswrapper[4869]: I0130 22:00:00.500782 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-x7hw8" Jan 30 22:00:00 crc kubenswrapper[4869]: I0130 22:00:00.948245 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496840-x7hw8"] Jan 30 22:00:01 crc kubenswrapper[4869]: I0130 22:00:01.361415 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-x7hw8" event={"ID":"39690947-2202-4f51-95ef-c462f923efe7","Type":"ContainerStarted","Data":"219b8c550ce1df0ba170aa2f4905fa2dde8739cb9b3127fd3f09e96e1aebeef9"} Jan 30 22:00:01 crc kubenswrapper[4869]: I0130 22:00:01.361865 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-x7hw8" event={"ID":"39690947-2202-4f51-95ef-c462f923efe7","Type":"ContainerStarted","Data":"e518d1ada0ce16740038bf7862444310b556a28ec38587c7f65512bfdee52c99"} Jan 30 22:00:01 crc kubenswrapper[4869]: I0130 22:00:01.934022 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hhzgf" Jan 30 22:00:01 crc kubenswrapper[4869]: I0130 22:00:01.955278 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-x7hw8" podStartSLOduration=1.955242972 podStartE2EDuration="1.955242972s" podCreationTimestamp="2026-01-30 22:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:00:01.38241425 +0000 UTC m=+1002.268172275" watchObservedRunningTime="2026-01-30 22:00:01.955242972 +0000 UTC m=+1002.841000997" Jan 30 22:00:01 crc kubenswrapper[4869]: I0130 22:00:01.990172 4869 patch_prober.go:28] interesting pod/machine-config-daemon-vzgdv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:00:01 crc kubenswrapper[4869]: I0130 22:00:01.990234 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:00:01 crc kubenswrapper[4869]: I0130 22:00:01.991235 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqccnw"] Jan 30 22:00:01 crc kubenswrapper[4869]: I0130 22:00:01.992337 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hhzgf" Jan 30 22:00:01 crc kubenswrapper[4869]: I0130 22:00:01.992417 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqccnw" Jan 30 22:00:01 crc kubenswrapper[4869]: I0130 22:00:01.993962 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-jpqpq" Jan 30 22:00:02 crc kubenswrapper[4869]: I0130 22:00:02.011994 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqccnw"] Jan 30 22:00:02 crc kubenswrapper[4869]: I0130 22:00:02.076005 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad3d5522-788a-47a8-8f82-cab1a12966ad-util\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqccnw\" (UID: \"ad3d5522-788a-47a8-8f82-cab1a12966ad\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqccnw" Jan 30 22:00:02 crc kubenswrapper[4869]: I0130 22:00:02.076133 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad3d5522-788a-47a8-8f82-cab1a12966ad-bundle\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqccnw\" (UID: \"ad3d5522-788a-47a8-8f82-cab1a12966ad\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqccnw" Jan 30 22:00:02 crc kubenswrapper[4869]: I0130 22:00:02.076182 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g9dj\" (UniqueName: \"kubernetes.io/projected/ad3d5522-788a-47a8-8f82-cab1a12966ad-kube-api-access-6g9dj\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqccnw\" (UID: \"ad3d5522-788a-47a8-8f82-cab1a12966ad\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqccnw" Jan 30 22:00:02 crc kubenswrapper[4869]: I0130 22:00:02.177591 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad3d5522-788a-47a8-8f82-cab1a12966ad-bundle\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqccnw\" (UID: \"ad3d5522-788a-47a8-8f82-cab1a12966ad\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqccnw" Jan 30 22:00:02 crc kubenswrapper[4869]: I0130 22:00:02.177647 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6g9dj\" (UniqueName: \"kubernetes.io/projected/ad3d5522-788a-47a8-8f82-cab1a12966ad-kube-api-access-6g9dj\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqccnw\" (UID: \"ad3d5522-788a-47a8-8f82-cab1a12966ad\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqccnw" Jan 30 22:00:02 crc kubenswrapper[4869]: I0130 22:00:02.177687 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad3d5522-788a-47a8-8f82-cab1a12966ad-util\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqccnw\" (UID: \"ad3d5522-788a-47a8-8f82-cab1a12966ad\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqccnw" Jan 30 22:00:02 crc kubenswrapper[4869]: I0130 22:00:02.178111 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/ad3d5522-788a-47a8-8f82-cab1a12966ad-bundle\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqccnw\" (UID: \"ad3d5522-788a-47a8-8f82-cab1a12966ad\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqccnw" Jan 30 22:00:02 crc kubenswrapper[4869]: I0130 22:00:02.178127 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad3d5522-788a-47a8-8f82-cab1a12966ad-util\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqccnw\" (UID: \"ad3d5522-788a-47a8-8f82-cab1a12966ad\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqccnw" Jan 30 22:00:02 crc kubenswrapper[4869]: I0130 22:00:02.196656 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6g9dj\" (UniqueName: \"kubernetes.io/projected/ad3d5522-788a-47a8-8f82-cab1a12966ad-kube-api-access-6g9dj\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqccnw\" (UID: \"ad3d5522-788a-47a8-8f82-cab1a12966ad\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqccnw" Jan 30 22:00:02 crc kubenswrapper[4869]: I0130 22:00:02.315702 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqccnw" Jan 30 22:00:02 crc kubenswrapper[4869]: I0130 22:00:02.370384 4869 generic.go:334] "Generic (PLEG): container finished" podID="39690947-2202-4f51-95ef-c462f923efe7" containerID="219b8c550ce1df0ba170aa2f4905fa2dde8739cb9b3127fd3f09e96e1aebeef9" exitCode=0 Jan 30 22:00:02 crc kubenswrapper[4869]: I0130 22:00:02.370578 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-x7hw8" event={"ID":"39690947-2202-4f51-95ef-c462f923efe7","Type":"ContainerDied","Data":"219b8c550ce1df0ba170aa2f4905fa2dde8739cb9b3127fd3f09e96e1aebeef9"} Jan 30 22:00:02 crc kubenswrapper[4869]: I0130 22:00:02.738962 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqccnw"] Jan 30 22:00:02 crc kubenswrapper[4869]: W0130 22:00:02.746502 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad3d5522_788a_47a8_8f82_cab1a12966ad.slice/crio-71ec8f7cfb7891a6f0f9a499c69601970b91e81f07660ae8f35906a4aa2cfbee WatchSource:0}: Error finding container 71ec8f7cfb7891a6f0f9a499c69601970b91e81f07660ae8f35906a4aa2cfbee: Status 404 returned error can't find the container with id 71ec8f7cfb7891a6f0f9a499c69601970b91e81f07660ae8f35906a4aa2cfbee Jan 30 22:00:03 crc kubenswrapper[4869]: I0130 22:00:03.378323 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqccnw" event={"ID":"ad3d5522-788a-47a8-8f82-cab1a12966ad","Type":"ContainerStarted","Data":"71ec8f7cfb7891a6f0f9a499c69601970b91e81f07660ae8f35906a4aa2cfbee"} Jan 30 22:00:03 crc kubenswrapper[4869]: I0130 22:00:03.635211 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-x7hw8" Jan 30 22:00:03 crc kubenswrapper[4869]: I0130 22:00:03.802082 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2z47\" (UniqueName: \"kubernetes.io/projected/39690947-2202-4f51-95ef-c462f923efe7-kube-api-access-t2z47\") pod \"39690947-2202-4f51-95ef-c462f923efe7\" (UID: \"39690947-2202-4f51-95ef-c462f923efe7\") " Jan 30 22:00:03 crc kubenswrapper[4869]: I0130 22:00:03.802163 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39690947-2202-4f51-95ef-c462f923efe7-config-volume\") pod \"39690947-2202-4f51-95ef-c462f923efe7\" (UID: \"39690947-2202-4f51-95ef-c462f923efe7\") " Jan 30 22:00:03 crc kubenswrapper[4869]: I0130 22:00:03.802318 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/39690947-2202-4f51-95ef-c462f923efe7-secret-volume\") pod \"39690947-2202-4f51-95ef-c462f923efe7\" (UID: \"39690947-2202-4f51-95ef-c462f923efe7\") " Jan 30 22:00:03 crc kubenswrapper[4869]: I0130 22:00:03.803152 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39690947-2202-4f51-95ef-c462f923efe7-config-volume" (OuterVolumeSpecName: "config-volume") pod "39690947-2202-4f51-95ef-c462f923efe7" (UID: "39690947-2202-4f51-95ef-c462f923efe7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:00:03 crc kubenswrapper[4869]: I0130 22:00:03.808335 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39690947-2202-4f51-95ef-c462f923efe7-kube-api-access-t2z47" (OuterVolumeSpecName: "kube-api-access-t2z47") pod "39690947-2202-4f51-95ef-c462f923efe7" (UID: "39690947-2202-4f51-95ef-c462f923efe7"). InnerVolumeSpecName "kube-api-access-t2z47". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:00:03 crc kubenswrapper[4869]: I0130 22:00:03.808478 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39690947-2202-4f51-95ef-c462f923efe7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "39690947-2202-4f51-95ef-c462f923efe7" (UID: "39690947-2202-4f51-95ef-c462f923efe7"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:00:03 crc kubenswrapper[4869]: I0130 22:00:03.903677 4869 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/39690947-2202-4f51-95ef-c462f923efe7-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:03 crc kubenswrapper[4869]: I0130 22:00:03.903713 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2z47\" (UniqueName: \"kubernetes.io/projected/39690947-2202-4f51-95ef-c462f923efe7-kube-api-access-t2z47\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:03 crc kubenswrapper[4869]: I0130 22:00:03.903725 4869 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39690947-2202-4f51-95ef-c462f923efe7-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:04 crc kubenswrapper[4869]: I0130 22:00:04.391299 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-x7hw8" event={"ID":"39690947-2202-4f51-95ef-c462f923efe7","Type":"ContainerDied","Data":"e518d1ada0ce16740038bf7862444310b556a28ec38587c7f65512bfdee52c99"} Jan 30 22:00:04 crc kubenswrapper[4869]: I0130 22:00:04.391329 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-x7hw8" Jan 30 22:00:04 crc kubenswrapper[4869]: I0130 22:00:04.391344 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e518d1ada0ce16740038bf7862444310b556a28ec38587c7f65512bfdee52c99" Jan 30 22:00:04 crc kubenswrapper[4869]: I0130 22:00:04.392884 4869 generic.go:334] "Generic (PLEG): container finished" podID="ad3d5522-788a-47a8-8f82-cab1a12966ad" containerID="281427bf1e5c442d2cbb3ea76325ad3c318e506d18fc520094ce4db1d578ce55" exitCode=0 Jan 30 22:00:04 crc kubenswrapper[4869]: I0130 22:00:04.392938 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqccnw" event={"ID":"ad3d5522-788a-47a8-8f82-cab1a12966ad","Type":"ContainerDied","Data":"281427bf1e5c442d2cbb3ea76325ad3c318e506d18fc520094ce4db1d578ce55"} Jan 30 22:00:05 crc kubenswrapper[4869]: I0130 22:00:05.400066 4869 generic.go:334] "Generic (PLEG): container finished" podID="ad3d5522-788a-47a8-8f82-cab1a12966ad" containerID="da7d1e5d796108713a867864f44ee22c88cebe365e7c7ac36f5d999dea8beac9" exitCode=0 Jan 30 22:00:05 crc kubenswrapper[4869]: I0130 22:00:05.400365 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqccnw" event={"ID":"ad3d5522-788a-47a8-8f82-cab1a12966ad","Type":"ContainerDied","Data":"da7d1e5d796108713a867864f44ee22c88cebe365e7c7ac36f5d999dea8beac9"} Jan 30 22:00:06 crc kubenswrapper[4869]: I0130 22:00:06.407297 4869 generic.go:334] "Generic (PLEG): container finished" podID="ad3d5522-788a-47a8-8f82-cab1a12966ad" containerID="b5b68a766153b7c1740db5d50f8417c5a60993c1095b44ba6b537ec44280ef15" exitCode=0 Jan 30 22:00:06 crc kubenswrapper[4869]: I0130 22:00:06.408039 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqccnw" event={"ID":"ad3d5522-788a-47a8-8f82-cab1a12966ad","Type":"ContainerDied","Data":"b5b68a766153b7c1740db5d50f8417c5a60993c1095b44ba6b537ec44280ef15"} Jan 30 22:00:06 crc kubenswrapper[4869]: I0130 22:00:06.739137 4869 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8vwt7" Jan 30 22:00:06 crc kubenswrapper[4869]: I0130 22:00:06.781674 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8vwt7" Jan 30 22:00:06 crc kubenswrapper[4869]: I0130 22:00:06.949148 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hhzgf"] Jan 30 22:00:06 crc kubenswrapper[4869]: I0130 22:00:06.949353 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hhzgf" podUID="0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2" containerName="registry-server" containerID="cri-o://fc486649e3343523282c305545ba7d4c0d86fd84865eeedeed2597584670de1d" gracePeriod=2 Jan 30 22:00:07 crc kubenswrapper[4869]: I0130 22:00:07.416434 4869 generic.go:334] "Generic (PLEG): container finished" podID="0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2" containerID="fc486649e3343523282c305545ba7d4c0d86fd84865eeedeed2597584670de1d" exitCode=0 Jan 30 22:00:07 crc kubenswrapper[4869]: I0130 22:00:07.416505 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hhzgf" event={"ID":"0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2","Type":"ContainerDied","Data":"fc486649e3343523282c305545ba7d4c0d86fd84865eeedeed2597584670de1d"} Jan 30 22:00:07 crc kubenswrapper[4869]: I0130 22:00:07.722144 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqccnw" Jan 30 22:00:07 crc kubenswrapper[4869]: I0130 22:00:07.854965 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad3d5522-788a-47a8-8f82-cab1a12966ad-bundle\") pod \"ad3d5522-788a-47a8-8f82-cab1a12966ad\" (UID: \"ad3d5522-788a-47a8-8f82-cab1a12966ad\") " Jan 30 22:00:07 crc kubenswrapper[4869]: I0130 22:00:07.855031 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad3d5522-788a-47a8-8f82-cab1a12966ad-util\") pod \"ad3d5522-788a-47a8-8f82-cab1a12966ad\" (UID: \"ad3d5522-788a-47a8-8f82-cab1a12966ad\") " Jan 30 22:00:07 crc kubenswrapper[4869]: I0130 22:00:07.855127 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g9dj\" (UniqueName: \"kubernetes.io/projected/ad3d5522-788a-47a8-8f82-cab1a12966ad-kube-api-access-6g9dj\") pod \"ad3d5522-788a-47a8-8f82-cab1a12966ad\" (UID: \"ad3d5522-788a-47a8-8f82-cab1a12966ad\") " Jan 30 22:00:07 crc kubenswrapper[4869]: I0130 22:00:07.856070 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad3d5522-788a-47a8-8f82-cab1a12966ad-bundle" (OuterVolumeSpecName: "bundle") pod "ad3d5522-788a-47a8-8f82-cab1a12966ad" (UID: "ad3d5522-788a-47a8-8f82-cab1a12966ad"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:00:07 crc kubenswrapper[4869]: I0130 22:00:07.863263 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad3d5522-788a-47a8-8f82-cab1a12966ad-kube-api-access-6g9dj" (OuterVolumeSpecName: "kube-api-access-6g9dj") pod "ad3d5522-788a-47a8-8f82-cab1a12966ad" (UID: "ad3d5522-788a-47a8-8f82-cab1a12966ad"). InnerVolumeSpecName "kube-api-access-6g9dj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:00:07 crc kubenswrapper[4869]: I0130 22:00:07.884522 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad3d5522-788a-47a8-8f82-cab1a12966ad-util" (OuterVolumeSpecName: "util") pod "ad3d5522-788a-47a8-8f82-cab1a12966ad" (UID: "ad3d5522-788a-47a8-8f82-cab1a12966ad"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:00:07 crc kubenswrapper[4869]: I0130 22:00:07.920334 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hhzgf" Jan 30 22:00:07 crc kubenswrapper[4869]: I0130 22:00:07.956777 4869 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad3d5522-788a-47a8-8f82-cab1a12966ad-util\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:07 crc kubenswrapper[4869]: I0130 22:00:07.956841 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g9dj\" (UniqueName: \"kubernetes.io/projected/ad3d5522-788a-47a8-8f82-cab1a12966ad-kube-api-access-6g9dj\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:07 crc kubenswrapper[4869]: I0130 22:00:07.956855 4869 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad3d5522-788a-47a8-8f82-cab1a12966ad-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:08 crc kubenswrapper[4869]: I0130 22:00:08.057631 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2-catalog-content\") pod \"0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2\" (UID: \"0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2\") " Jan 30 22:00:08 crc kubenswrapper[4869]: I0130 22:00:08.057727 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5rbr\" (UniqueName: \"kubernetes.io/projected/0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2-kube-api-access-v5rbr\") pod \"0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2\" (UID: \"0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2\") " Jan 30 22:00:08 crc kubenswrapper[4869]: I0130 22:00:08.057795 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2-utilities\") pod \"0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2\" (UID: \"0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2\") " Jan 30 22:00:08 crc kubenswrapper[4869]: I0130 22:00:08.058826 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2-utilities" (OuterVolumeSpecName: "utilities") pod "0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2" (UID: "0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:00:08 crc kubenswrapper[4869]: I0130 22:00:08.060847 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2-kube-api-access-v5rbr" (OuterVolumeSpecName: "kube-api-access-v5rbr") pod "0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2" (UID: "0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2"). InnerVolumeSpecName "kube-api-access-v5rbr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:00:08 crc kubenswrapper[4869]: I0130 22:00:08.108201 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2" (UID: "0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:00:08 crc kubenswrapper[4869]: I0130 22:00:08.160500 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:08 crc kubenswrapper[4869]: I0130 22:00:08.160539 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:08 crc kubenswrapper[4869]: I0130 22:00:08.160554 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5rbr\" (UniqueName: \"kubernetes.io/projected/0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2-kube-api-access-v5rbr\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:08 crc kubenswrapper[4869]: I0130 22:00:08.436350 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqccnw" event={"ID":"ad3d5522-788a-47a8-8f82-cab1a12966ad","Type":"ContainerDied","Data":"71ec8f7cfb7891a6f0f9a499c69601970b91e81f07660ae8f35906a4aa2cfbee"} Jan 30 22:00:08 crc kubenswrapper[4869]: I0130 22:00:08.437163 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71ec8f7cfb7891a6f0f9a499c69601970b91e81f07660ae8f35906a4aa2cfbee" Jan 30 22:00:08 crc kubenswrapper[4869]: I0130 22:00:08.436377 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqccnw" Jan 30 22:00:08 crc kubenswrapper[4869]: I0130 22:00:08.439181 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hhzgf" event={"ID":"0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2","Type":"ContainerDied","Data":"09e4af0dbdb9830acee4bc2d66bfb893dbede6c5e0d1f0d92120966ac9065b9d"} Jan 30 22:00:08 crc kubenswrapper[4869]: I0130 22:00:08.439278 4869 scope.go:117] "RemoveContainer" containerID="fc486649e3343523282c305545ba7d4c0d86fd84865eeedeed2597584670de1d" Jan 30 22:00:08 crc kubenswrapper[4869]: I0130 22:00:08.439447 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hhzgf" Jan 30 22:00:08 crc kubenswrapper[4869]: I0130 22:00:08.476681 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hhzgf"] Jan 30 22:00:08 crc kubenswrapper[4869]: I0130 22:00:08.478746 4869 scope.go:117] "RemoveContainer" containerID="3e684bd14bff2a8ee272efa8825bba1486da0af77d790af41a07e47db68a84f7" Jan 30 22:00:08 crc kubenswrapper[4869]: I0130 22:00:08.484488 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hhzgf"] Jan 30 22:00:08 crc kubenswrapper[4869]: I0130 22:00:08.521099 4869 scope.go:117] "RemoveContainer" containerID="00747f47d17aae6ae77cd46ca3620a79adb995417a5ee59384271a0728b10ce4" Jan 30 22:00:09 crc kubenswrapper[4869]: I0130 22:00:09.885015 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2" path="/var/lib/kubelet/pods/0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2/volumes" Jan 30 22:00:12 crc kubenswrapper[4869]: I0130 22:00:12.953609 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8vwt7"] Jan 30 22:00:12 crc kubenswrapper[4869]: I0130 22:00:12.954699 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8vwt7" podUID="df94940f-3505-4516-94c5-19c8a367e1c8" containerName="registry-server" containerID="cri-o://48c0ebd10a9344ff774bf24a8af6e313e69b70c8339d269db2b929d6d1c3c5cc" gracePeriod=2 Jan 30 22:00:13 crc kubenswrapper[4869]: I0130 22:00:13.478928 4869 generic.go:334] "Generic (PLEG): container finished" podID="df94940f-3505-4516-94c5-19c8a367e1c8" containerID="48c0ebd10a9344ff774bf24a8af6e313e69b70c8339d269db2b929d6d1c3c5cc" exitCode=0 Jan 30 22:00:13 crc kubenswrapper[4869]: I0130 22:00:13.478957 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vwt7" event={"ID":"df94940f-3505-4516-94c5-19c8a367e1c8","Type":"ContainerDied","Data":"48c0ebd10a9344ff774bf24a8af6e313e69b70c8339d269db2b929d6d1c3c5cc"} Jan 30 22:00:13 crc kubenswrapper[4869]: I0130 22:00:13.884269 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8vwt7" Jan 30 22:00:13 crc kubenswrapper[4869]: I0130 22:00:13.963289 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df94940f-3505-4516-94c5-19c8a367e1c8-catalog-content\") pod \"df94940f-3505-4516-94c5-19c8a367e1c8\" (UID: \"df94940f-3505-4516-94c5-19c8a367e1c8\") " Jan 30 22:00:13 crc kubenswrapper[4869]: I0130 22:00:13.963473 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df94940f-3505-4516-94c5-19c8a367e1c8-utilities\") pod \"df94940f-3505-4516-94c5-19c8a367e1c8\" (UID: \"df94940f-3505-4516-94c5-19c8a367e1c8\") " Jan 30 22:00:13 crc kubenswrapper[4869]: I0130 22:00:13.963583 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmfth\" (UniqueName: \"kubernetes.io/projected/df94940f-3505-4516-94c5-19c8a367e1c8-kube-api-access-dmfth\") pod \"df94940f-3505-4516-94c5-19c8a367e1c8\" (UID: \"df94940f-3505-4516-94c5-19c8a367e1c8\") " Jan 30 22:00:13 crc kubenswrapper[4869]: I0130 22:00:13.965817 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df94940f-3505-4516-94c5-19c8a367e1c8-utilities" (OuterVolumeSpecName: "utilities") pod "df94940f-3505-4516-94c5-19c8a367e1c8" (UID: "df94940f-3505-4516-94c5-19c8a367e1c8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:00:13 crc kubenswrapper[4869]: I0130 22:00:13.977217 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df94940f-3505-4516-94c5-19c8a367e1c8-kube-api-access-dmfth" (OuterVolumeSpecName: "kube-api-access-dmfth") pod "df94940f-3505-4516-94c5-19c8a367e1c8" (UID: "df94940f-3505-4516-94c5-19c8a367e1c8"). InnerVolumeSpecName "kube-api-access-dmfth". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:00:14 crc kubenswrapper[4869]: I0130 22:00:14.072766 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmfth\" (UniqueName: \"kubernetes.io/projected/df94940f-3505-4516-94c5-19c8a367e1c8-kube-api-access-dmfth\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:14 crc kubenswrapper[4869]: I0130 22:00:14.072808 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df94940f-3505-4516-94c5-19c8a367e1c8-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:14 crc kubenswrapper[4869]: I0130 22:00:14.123682 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df94940f-3505-4516-94c5-19c8a367e1c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "df94940f-3505-4516-94c5-19c8a367e1c8" (UID: "df94940f-3505-4516-94c5-19c8a367e1c8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:00:14 crc kubenswrapper[4869]: I0130 22:00:14.173990 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df94940f-3505-4516-94c5-19c8a367e1c8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:14 crc kubenswrapper[4869]: I0130 22:00:14.487931 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vwt7" event={"ID":"df94940f-3505-4516-94c5-19c8a367e1c8","Type":"ContainerDied","Data":"597221ab3183ee0958cd2ee0f61fb340ed0d1f30609d8a21bc1cc27aaee2b7fa"} Jan 30 22:00:14 crc kubenswrapper[4869]: I0130 22:00:14.488000 4869 scope.go:117] "RemoveContainer" containerID="48c0ebd10a9344ff774bf24a8af6e313e69b70c8339d269db2b929d6d1c3c5cc" Jan 30 22:00:14 crc kubenswrapper[4869]: I0130 22:00:14.488092 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8vwt7" Jan 30 22:00:14 crc kubenswrapper[4869]: I0130 22:00:14.519577 4869 scope.go:117] "RemoveContainer" containerID="d1a9adefb762188220c6ad838749dd313875d3f7f8f3224b991535c5348c64a7" Jan 30 22:00:14 crc kubenswrapper[4869]: I0130 22:00:14.551113 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8vwt7"] Jan 30 22:00:14 crc kubenswrapper[4869]: I0130 22:00:14.554737 4869 scope.go:117] "RemoveContainer" containerID="8a3bb5a3cbbdd3731643b7efdb3b05e9ec08155fbaf5548b43dd5e4e277972cf" Jan 30 22:00:14 crc kubenswrapper[4869]: I0130 22:00:14.555812 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8vwt7"] Jan 30 22:00:15 crc kubenswrapper[4869]: I0130 22:00:15.884344 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df94940f-3505-4516-94c5-19c8a367e1c8" path="/var/lib/kubelet/pods/df94940f-3505-4516-94c5-19c8a367e1c8/volumes" Jan 30 22:00:17 crc kubenswrapper[4869]: I0130 22:00:17.362922 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-5f98c88f68-6xdqz"] Jan 30 22:00:17 crc kubenswrapper[4869]: E0130 22:00:17.363261 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df94940f-3505-4516-94c5-19c8a367e1c8" containerName="extract-content" Jan 30 22:00:17 crc kubenswrapper[4869]: I0130 22:00:17.363276 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="df94940f-3505-4516-94c5-19c8a367e1c8" containerName="extract-content" Jan 30 22:00:17 crc kubenswrapper[4869]: E0130 22:00:17.363287 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df94940f-3505-4516-94c5-19c8a367e1c8" containerName="registry-server" Jan 30 22:00:17 crc kubenswrapper[4869]: I0130 22:00:17.363293 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="df94940f-3505-4516-94c5-19c8a367e1c8" containerName="registry-server" Jan 30 22:00:17 crc kubenswrapper[4869]: E0130 22:00:17.363313 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad3d5522-788a-47a8-8f82-cab1a12966ad" containerName="pull" Jan 30 22:00:17 crc kubenswrapper[4869]: I0130 22:00:17.363320 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad3d5522-788a-47a8-8f82-cab1a12966ad" containerName="pull" Jan 30 22:00:17 crc kubenswrapper[4869]: E0130 22:00:17.363330 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df94940f-3505-4516-94c5-19c8a367e1c8" containerName="extract-utilities" Jan 30 22:00:17 
crc kubenswrapper[4869]: I0130 22:00:17.363336 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="df94940f-3505-4516-94c5-19c8a367e1c8" containerName="extract-utilities" Jan 30 22:00:17 crc kubenswrapper[4869]: E0130 22:00:17.363345 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2" containerName="extract-content" Jan 30 22:00:17 crc kubenswrapper[4869]: I0130 22:00:17.363351 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2" containerName="extract-content" Jan 30 22:00:17 crc kubenswrapper[4869]: E0130 22:00:17.363359 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad3d5522-788a-47a8-8f82-cab1a12966ad" containerName="util" Jan 30 22:00:17 crc kubenswrapper[4869]: I0130 22:00:17.363365 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad3d5522-788a-47a8-8f82-cab1a12966ad" containerName="util" Jan 30 22:00:17 crc kubenswrapper[4869]: E0130 22:00:17.363372 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2" containerName="registry-server" Jan 30 22:00:17 crc kubenswrapper[4869]: I0130 22:00:17.363378 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2" containerName="registry-server" Jan 30 22:00:17 crc kubenswrapper[4869]: E0130 22:00:17.363389 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39690947-2202-4f51-95ef-c462f923efe7" containerName="collect-profiles" Jan 30 22:00:17 crc kubenswrapper[4869]: I0130 22:00:17.363395 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="39690947-2202-4f51-95ef-c462f923efe7" containerName="collect-profiles" Jan 30 22:00:17 crc kubenswrapper[4869]: E0130 22:00:17.363403 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad3d5522-788a-47a8-8f82-cab1a12966ad" containerName="extract" Jan 30 22:00:17 crc kubenswrapper[4869]: I0130 22:00:17.363409 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad3d5522-788a-47a8-8f82-cab1a12966ad" containerName="extract" Jan 30 22:00:17 crc kubenswrapper[4869]: E0130 22:00:17.363419 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2" containerName="extract-utilities" Jan 30 22:00:17 crc kubenswrapper[4869]: I0130 22:00:17.363426 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2" containerName="extract-utilities" Jan 30 22:00:17 crc kubenswrapper[4869]: I0130 22:00:17.388243 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="df94940f-3505-4516-94c5-19c8a367e1c8" containerName="registry-server" Jan 30 22:00:17 crc kubenswrapper[4869]: I0130 22:00:17.388321 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad3d5522-788a-47a8-8f82-cab1a12966ad" containerName="extract" Jan 30 22:00:17 crc kubenswrapper[4869]: I0130 22:00:17.388360 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="0572dcb2-4223-4ee6-ab95-f6a5f3cdf7b2" containerName="registry-server" Jan 30 22:00:17 crc kubenswrapper[4869]: I0130 22:00:17.388390 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="39690947-2202-4f51-95ef-c462f923efe7" containerName="collect-profiles" Jan 30 22:00:17 crc kubenswrapper[4869]: I0130 22:00:17.389501 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-5f98c88f68-6xdqz" Jan 30 22:00:17 crc kubenswrapper[4869]: I0130 22:00:17.393747 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-service-cert" Jan 30 22:00:17 crc kubenswrapper[4869]: I0130 22:00:17.393972 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-bld7t" Jan 30 22:00:17 crc kubenswrapper[4869]: I0130 22:00:17.404118 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-5f98c88f68-6xdqz"] Jan 30 22:00:17 crc kubenswrapper[4869]: I0130 22:00:17.522665 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ca2d32a5-f5a1-4e59-908b-0d55de2c600f-apiservice-cert\") pod \"keystone-operator-controller-manager-5f98c88f68-6xdqz\" (UID: \"ca2d32a5-f5a1-4e59-908b-0d55de2c600f\") " pod="openstack-operators/keystone-operator-controller-manager-5f98c88f68-6xdqz" Jan 30 22:00:17 crc kubenswrapper[4869]: I0130 22:00:17.522728 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9nwl\" (UniqueName: \"kubernetes.io/projected/ca2d32a5-f5a1-4e59-908b-0d55de2c600f-kube-api-access-c9nwl\") pod \"keystone-operator-controller-manager-5f98c88f68-6xdqz\" (UID: \"ca2d32a5-f5a1-4e59-908b-0d55de2c600f\") " pod="openstack-operators/keystone-operator-controller-manager-5f98c88f68-6xdqz" Jan 30 22:00:17 crc kubenswrapper[4869]: I0130 22:00:17.522798 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ca2d32a5-f5a1-4e59-908b-0d55de2c600f-webhook-cert\") pod \"keystone-operator-controller-manager-5f98c88f68-6xdqz\" (UID: \"ca2d32a5-f5a1-4e59-908b-0d55de2c600f\") " pod="openstack-operators/keystone-operator-controller-manager-5f98c88f68-6xdqz" Jan 30 22:00:17 crc kubenswrapper[4869]: I0130 22:00:17.624602 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ca2d32a5-f5a1-4e59-908b-0d55de2c600f-webhook-cert\") pod \"keystone-operator-controller-manager-5f98c88f68-6xdqz\" (UID: \"ca2d32a5-f5a1-4e59-908b-0d55de2c600f\") " pod="openstack-operators/keystone-operator-controller-manager-5f98c88f68-6xdqz" Jan 30 22:00:17 crc kubenswrapper[4869]: I0130 22:00:17.624716 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ca2d32a5-f5a1-4e59-908b-0d55de2c600f-apiservice-cert\") pod \"keystone-operator-controller-manager-5f98c88f68-6xdqz\" (UID: \"ca2d32a5-f5a1-4e59-908b-0d55de2c600f\") " pod="openstack-operators/keystone-operator-controller-manager-5f98c88f68-6xdqz" Jan 30 22:00:17 crc kubenswrapper[4869]: I0130 22:00:17.624741 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9nwl\" (UniqueName: \"kubernetes.io/projected/ca2d32a5-f5a1-4e59-908b-0d55de2c600f-kube-api-access-c9nwl\") pod \"keystone-operator-controller-manager-5f98c88f68-6xdqz\" (UID: \"ca2d32a5-f5a1-4e59-908b-0d55de2c600f\") " pod="openstack-operators/keystone-operator-controller-manager-5f98c88f68-6xdqz" Jan 30 22:00:17 crc kubenswrapper[4869]: I0130 22:00:17.631488 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ca2d32a5-f5a1-4e59-908b-0d55de2c600f-apiservice-cert\") pod \"keystone-operator-controller-manager-5f98c88f68-6xdqz\" (UID: \"ca2d32a5-f5a1-4e59-908b-0d55de2c600f\") " pod="openstack-operators/keystone-operator-controller-manager-5f98c88f68-6xdqz" Jan 30 22:00:17 crc kubenswrapper[4869]: I0130 22:00:17.631499 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ca2d32a5-f5a1-4e59-908b-0d55de2c600f-webhook-cert\") pod \"keystone-operator-controller-manager-5f98c88f68-6xdqz\" (UID: \"ca2d32a5-f5a1-4e59-908b-0d55de2c600f\") " pod="openstack-operators/keystone-operator-controller-manager-5f98c88f68-6xdqz" Jan 30 22:00:17 crc kubenswrapper[4869]: I0130 22:00:17.640764 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9nwl\" (UniqueName: \"kubernetes.io/projected/ca2d32a5-f5a1-4e59-908b-0d55de2c600f-kube-api-access-c9nwl\") pod \"keystone-operator-controller-manager-5f98c88f68-6xdqz\" (UID: \"ca2d32a5-f5a1-4e59-908b-0d55de2c600f\") " pod="openstack-operators/keystone-operator-controller-manager-5f98c88f68-6xdqz" Jan 30 22:00:17 crc kubenswrapper[4869]: I0130 22:00:17.720040 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-5f98c88f68-6xdqz" Jan 30 22:00:18 crc kubenswrapper[4869]: I0130 22:00:18.193407 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-5f98c88f68-6xdqz"] Jan 30 22:00:18 crc kubenswrapper[4869]: I0130 22:00:18.521456 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-5f98c88f68-6xdqz" event={"ID":"ca2d32a5-f5a1-4e59-908b-0d55de2c600f","Type":"ContainerStarted","Data":"f5b040999c3cde9d5db74daac90a47c96b41b5501bf4a80e42d25b80d83e52d3"} Jan 30 22:00:22 crc kubenswrapper[4869]: I0130 22:00:22.552372 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-5f98c88f68-6xdqz" event={"ID":"ca2d32a5-f5a1-4e59-908b-0d55de2c600f","Type":"ContainerStarted","Data":"66b7c51833dc4cc4fa3068fae54acfb8fe3239b0a2f01451cdac49df35cbff4b"} Jan 30 22:00:22 crc kubenswrapper[4869]: I0130 22:00:22.552933 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-5f98c88f68-6xdqz" Jan 30 22:00:22 crc kubenswrapper[4869]: I0130 22:00:22.554061 4869 generic.go:334] "Generic (PLEG): container finished" podID="111e74f4-fd99-4f7d-8057-43794129795f" containerID="cd4655fc40703668ded911e78b28ac16c883c993e9883ba3dc852f674cd2d7f3" exitCode=0 Jan 30 22:00:22 crc kubenswrapper[4869]: I0130 22:00:22.554098 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/rabbitmq-server-0" event={"ID":"111e74f4-fd99-4f7d-8057-43794129795f","Type":"ContainerDied","Data":"cd4655fc40703668ded911e78b28ac16c883c993e9883ba3dc852f674cd2d7f3"} Jan 30 22:00:22 crc kubenswrapper[4869]: I0130 22:00:22.573496 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-5f98c88f68-6xdqz" podStartSLOduration=2.285289029 podStartE2EDuration="5.573472463s" podCreationTimestamp="2026-01-30 22:00:17 +0000 UTC" firstStartedPulling="2026-01-30 22:00:18.203531627 +0000 UTC 
m=+1019.089289642" lastFinishedPulling="2026-01-30 22:00:21.491715061 +0000 UTC m=+1022.377473076" observedRunningTime="2026-01-30 22:00:22.571847892 +0000 UTC m=+1023.457605927" watchObservedRunningTime="2026-01-30 22:00:22.573472463 +0000 UTC m=+1023.459230498" Jan 30 22:00:23 crc kubenswrapper[4869]: I0130 22:00:23.562560 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/rabbitmq-server-0" event={"ID":"111e74f4-fd99-4f7d-8057-43794129795f","Type":"ContainerStarted","Data":"34563f83938add6f7bcfa6405c612841cd4dbad688d596ba90fc125ad61fb4be"} Jan 30 22:00:23 crc kubenswrapper[4869]: I0130 22:00:23.563249 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 30 22:00:23 crc kubenswrapper[4869]: I0130 22:00:23.585928 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/rabbitmq-server-0" podStartSLOduration=36.301902089 podStartE2EDuration="44.585886812s" podCreationTimestamp="2026-01-30 21:59:39 +0000 UTC" firstStartedPulling="2026-01-30 21:59:41.038506375 +0000 UTC m=+981.924264400" lastFinishedPulling="2026-01-30 21:59:49.322491098 +0000 UTC m=+990.208249123" observedRunningTime="2026-01-30 22:00:23.584401755 +0000 UTC m=+1024.470159790" watchObservedRunningTime="2026-01-30 22:00:23.585886812 +0000 UTC m=+1024.471644847" Jan 30 22:00:27 crc kubenswrapper[4869]: I0130 22:00:27.726449 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-5f98c88f68-6xdqz" Jan 30 22:00:29 crc kubenswrapper[4869]: I0130 22:00:29.815119 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/keystone-db-create-2wgdn"] Jan 30 22:00:29 crc kubenswrapper[4869]: I0130 22:00:29.815886 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone-db-create-2wgdn" Jan 30 22:00:29 crc kubenswrapper[4869]: I0130 22:00:29.824445 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/keystone-db-create-2wgdn"] Jan 30 22:00:29 crc kubenswrapper[4869]: I0130 22:00:29.904038 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgg8r\" (UniqueName: \"kubernetes.io/projected/3f76026f-41ae-4897-a348-ac4c49c2c2c5-kube-api-access-dgg8r\") pod \"keystone-db-create-2wgdn\" (UID: \"3f76026f-41ae-4897-a348-ac4c49c2c2c5\") " pod="cinder-kuttl-tests/keystone-db-create-2wgdn" Jan 30 22:00:29 crc kubenswrapper[4869]: I0130 22:00:29.904116 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f76026f-41ae-4897-a348-ac4c49c2c2c5-operator-scripts\") pod \"keystone-db-create-2wgdn\" (UID: \"3f76026f-41ae-4897-a348-ac4c49c2c2c5\") " pod="cinder-kuttl-tests/keystone-db-create-2wgdn" Jan 30 22:00:29 crc kubenswrapper[4869]: I0130 22:00:29.910431 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/keystone-431c-account-create-update-4fcmg"] Jan 30 22:00:29 crc kubenswrapper[4869]: I0130 22:00:29.911116 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/keystone-431c-account-create-update-4fcmg"] Jan 30 22:00:29 crc kubenswrapper[4869]: I0130 22:00:29.911202 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/keystone-431c-account-create-update-4fcmg" Jan 30 22:00:29 crc kubenswrapper[4869]: I0130 22:00:29.916509 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone-db-secret" Jan 30 22:00:30 crc kubenswrapper[4869]: I0130 22:00:30.005523 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/448717a9-d5d0-47dc-9b41-ffbefbdbb175-operator-scripts\") pod \"keystone-431c-account-create-update-4fcmg\" (UID: \"448717a9-d5d0-47dc-9b41-ffbefbdbb175\") " pod="cinder-kuttl-tests/keystone-431c-account-create-update-4fcmg" Jan 30 22:00:30 crc kubenswrapper[4869]: I0130 22:00:30.005579 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgg8r\" (UniqueName: \"kubernetes.io/projected/3f76026f-41ae-4897-a348-ac4c49c2c2c5-kube-api-access-dgg8r\") pod \"keystone-db-create-2wgdn\" (UID: \"3f76026f-41ae-4897-a348-ac4c49c2c2c5\") " pod="cinder-kuttl-tests/keystone-db-create-2wgdn" Jan 30 22:00:30 crc kubenswrapper[4869]: I0130 22:00:30.005603 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f76026f-41ae-4897-a348-ac4c49c2c2c5-operator-scripts\") pod \"keystone-db-create-2wgdn\" (UID: \"3f76026f-41ae-4897-a348-ac4c49c2c2c5\") " pod="cinder-kuttl-tests/keystone-db-create-2wgdn" Jan 30 22:00:30 crc kubenswrapper[4869]: I0130 22:00:30.005670 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ks69\" (UniqueName: \"kubernetes.io/projected/448717a9-d5d0-47dc-9b41-ffbefbdbb175-kube-api-access-9ks69\") pod \"keystone-431c-account-create-update-4fcmg\" (UID: \"448717a9-d5d0-47dc-9b41-ffbefbdbb175\") " pod="cinder-kuttl-tests/keystone-431c-account-create-update-4fcmg" Jan 30 22:00:30 crc kubenswrapper[4869]: I0130 22:00:30.006939 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f76026f-41ae-4897-a348-ac4c49c2c2c5-operator-scripts\") pod \"keystone-db-create-2wgdn\" (UID: \"3f76026f-41ae-4897-a348-ac4c49c2c2c5\") " pod="cinder-kuttl-tests/keystone-db-create-2wgdn" Jan 30 22:00:30 crc kubenswrapper[4869]: I0130 22:00:30.029742 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgg8r\" (UniqueName: \"kubernetes.io/projected/3f76026f-41ae-4897-a348-ac4c49c2c2c5-kube-api-access-dgg8r\") pod \"keystone-db-create-2wgdn\" (UID: \"3f76026f-41ae-4897-a348-ac4c49c2c2c5\") " pod="cinder-kuttl-tests/keystone-db-create-2wgdn" Jan 30 22:00:30 crc kubenswrapper[4869]: I0130 22:00:30.106513 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ks69\" (UniqueName: \"kubernetes.io/projected/448717a9-d5d0-47dc-9b41-ffbefbdbb175-kube-api-access-9ks69\") pod \"keystone-431c-account-create-update-4fcmg\" (UID: \"448717a9-d5d0-47dc-9b41-ffbefbdbb175\") " pod="cinder-kuttl-tests/keystone-431c-account-create-update-4fcmg" Jan 30 22:00:30 crc kubenswrapper[4869]: I0130 22:00:30.106581 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/448717a9-d5d0-47dc-9b41-ffbefbdbb175-operator-scripts\") pod \"keystone-431c-account-create-update-4fcmg\" (UID: \"448717a9-d5d0-47dc-9b41-ffbefbdbb175\") " 
pod="cinder-kuttl-tests/keystone-431c-account-create-update-4fcmg" Jan 30 22:00:30 crc kubenswrapper[4869]: I0130 22:00:30.107380 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/448717a9-d5d0-47dc-9b41-ffbefbdbb175-operator-scripts\") pod \"keystone-431c-account-create-update-4fcmg\" (UID: \"448717a9-d5d0-47dc-9b41-ffbefbdbb175\") " pod="cinder-kuttl-tests/keystone-431c-account-create-update-4fcmg" Jan 30 22:00:30 crc kubenswrapper[4869]: I0130 22:00:30.122518 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ks69\" (UniqueName: \"kubernetes.io/projected/448717a9-d5d0-47dc-9b41-ffbefbdbb175-kube-api-access-9ks69\") pod \"keystone-431c-account-create-update-4fcmg\" (UID: \"448717a9-d5d0-47dc-9b41-ffbefbdbb175\") " pod="cinder-kuttl-tests/keystone-431c-account-create-update-4fcmg" Jan 30 22:00:30 crc kubenswrapper[4869]: I0130 22:00:30.132794 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone-db-create-2wgdn" Jan 30 22:00:30 crc kubenswrapper[4869]: I0130 22:00:30.250357 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone-431c-account-create-update-4fcmg" Jan 30 22:00:30 crc kubenswrapper[4869]: I0130 22:00:30.549749 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/keystone-db-create-2wgdn"] Jan 30 22:00:30 crc kubenswrapper[4869]: I0130 22:00:30.617325 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-db-create-2wgdn" event={"ID":"3f76026f-41ae-4897-a348-ac4c49c2c2c5","Type":"ContainerStarted","Data":"5a78e5c4b896f94a5048ad17012f8340671557fb955be7ebb8404e339991054c"} Jan 30 22:00:30 crc kubenswrapper[4869]: I0130 22:00:30.954412 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/keystone-431c-account-create-update-4fcmg"] Jan 30 22:00:31 crc kubenswrapper[4869]: I0130 22:00:31.624705 4869 generic.go:334] "Generic (PLEG): container finished" podID="3f76026f-41ae-4897-a348-ac4c49c2c2c5" containerID="a59c1b925edfa20a4b13364d7a4e1a73764308a5622726b53bfb9a145ab327d7" exitCode=0 Jan 30 22:00:31 crc kubenswrapper[4869]: I0130 22:00:31.625023 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-db-create-2wgdn" event={"ID":"3f76026f-41ae-4897-a348-ac4c49c2c2c5","Type":"ContainerDied","Data":"a59c1b925edfa20a4b13364d7a4e1a73764308a5622726b53bfb9a145ab327d7"} Jan 30 22:00:31 crc kubenswrapper[4869]: I0130 22:00:31.626691 4869 generic.go:334] "Generic (PLEG): container finished" podID="448717a9-d5d0-47dc-9b41-ffbefbdbb175" containerID="a54c43c7b8fb82e6c2590299b83ff279e89f34ddacde303e3383dcb4ff22f03e" exitCode=0 Jan 30 22:00:31 crc kubenswrapper[4869]: I0130 22:00:31.626760 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-431c-account-create-update-4fcmg" event={"ID":"448717a9-d5d0-47dc-9b41-ffbefbdbb175","Type":"ContainerDied","Data":"a54c43c7b8fb82e6c2590299b83ff279e89f34ddacde303e3383dcb4ff22f03e"} Jan 30 22:00:31 crc kubenswrapper[4869]: I0130 22:00:31.626799 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-431c-account-create-update-4fcmg" event={"ID":"448717a9-d5d0-47dc-9b41-ffbefbdbb175","Type":"ContainerStarted","Data":"053c6ea90df306c6794468ba10878ce473678c7779c9b25ef13dbba4a39c3ec7"} Jan 30 22:00:31 crc kubenswrapper[4869]: I0130 22:00:31.991013 4869 
patch_prober.go:28] interesting pod/machine-config-daemon-vzgdv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:00:31 crc kubenswrapper[4869]: I0130 22:00:31.991088 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:00:32 crc kubenswrapper[4869]: I0130 22:00:32.562393 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-index-6ggmv"] Jan 30 22:00:32 crc kubenswrapper[4869]: I0130 22:00:32.563423 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-index-6ggmv" Jan 30 22:00:32 crc kubenswrapper[4869]: I0130 22:00:32.566388 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-index-dockercfg-rbzhs" Jan 30 22:00:32 crc kubenswrapper[4869]: I0130 22:00:32.571446 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-index-6ggmv"] Jan 30 22:00:32 crc kubenswrapper[4869]: I0130 22:00:32.649255 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pnvt\" (UniqueName: \"kubernetes.io/projected/8c7d59e0-6997-436f-b17d-67e8d1c0f319-kube-api-access-2pnvt\") pod \"cinder-operator-index-6ggmv\" (UID: \"8c7d59e0-6997-436f-b17d-67e8d1c0f319\") " pod="openstack-operators/cinder-operator-index-6ggmv" Jan 30 22:00:32 crc kubenswrapper[4869]: I0130 22:00:32.750999 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pnvt\" (UniqueName: \"kubernetes.io/projected/8c7d59e0-6997-436f-b17d-67e8d1c0f319-kube-api-access-2pnvt\") pod \"cinder-operator-index-6ggmv\" (UID: \"8c7d59e0-6997-436f-b17d-67e8d1c0f319\") " pod="openstack-operators/cinder-operator-index-6ggmv" Jan 30 22:00:32 crc kubenswrapper[4869]: I0130 22:00:32.772411 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pnvt\" (UniqueName: \"kubernetes.io/projected/8c7d59e0-6997-436f-b17d-67e8d1c0f319-kube-api-access-2pnvt\") pod \"cinder-operator-index-6ggmv\" (UID: \"8c7d59e0-6997-436f-b17d-67e8d1c0f319\") " pod="openstack-operators/cinder-operator-index-6ggmv" Jan 30 22:00:32 crc kubenswrapper[4869]: I0130 22:00:32.898911 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-index-6ggmv" Jan 30 22:00:33 crc kubenswrapper[4869]: I0130 22:00:33.025032 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone-db-create-2wgdn" Jan 30 22:00:33 crc kubenswrapper[4869]: I0130 22:00:33.030063 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/keystone-431c-account-create-update-4fcmg" Jan 30 22:00:33 crc kubenswrapper[4869]: I0130 22:00:33.160635 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f76026f-41ae-4897-a348-ac4c49c2c2c5-operator-scripts\") pod \"3f76026f-41ae-4897-a348-ac4c49c2c2c5\" (UID: \"3f76026f-41ae-4897-a348-ac4c49c2c2c5\") " Jan 30 22:00:33 crc kubenswrapper[4869]: I0130 22:00:33.160695 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/448717a9-d5d0-47dc-9b41-ffbefbdbb175-operator-scripts\") pod \"448717a9-d5d0-47dc-9b41-ffbefbdbb175\" (UID: \"448717a9-d5d0-47dc-9b41-ffbefbdbb175\") " Jan 30 22:00:33 crc kubenswrapper[4869]: I0130 22:00:33.160744 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgg8r\" (UniqueName: \"kubernetes.io/projected/3f76026f-41ae-4897-a348-ac4c49c2c2c5-kube-api-access-dgg8r\") pod \"3f76026f-41ae-4897-a348-ac4c49c2c2c5\" (UID: \"3f76026f-41ae-4897-a348-ac4c49c2c2c5\") " Jan 30 22:00:33 crc kubenswrapper[4869]: I0130 22:00:33.160809 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ks69\" (UniqueName: \"kubernetes.io/projected/448717a9-d5d0-47dc-9b41-ffbefbdbb175-kube-api-access-9ks69\") pod \"448717a9-d5d0-47dc-9b41-ffbefbdbb175\" (UID: \"448717a9-d5d0-47dc-9b41-ffbefbdbb175\") " Jan 30 22:00:33 crc kubenswrapper[4869]: I0130 22:00:33.161658 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f76026f-41ae-4897-a348-ac4c49c2c2c5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3f76026f-41ae-4897-a348-ac4c49c2c2c5" (UID: "3f76026f-41ae-4897-a348-ac4c49c2c2c5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:00:33 crc kubenswrapper[4869]: I0130 22:00:33.161657 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/448717a9-d5d0-47dc-9b41-ffbefbdbb175-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "448717a9-d5d0-47dc-9b41-ffbefbdbb175" (UID: "448717a9-d5d0-47dc-9b41-ffbefbdbb175"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:00:33 crc kubenswrapper[4869]: I0130 22:00:33.168145 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f76026f-41ae-4897-a348-ac4c49c2c2c5-kube-api-access-dgg8r" (OuterVolumeSpecName: "kube-api-access-dgg8r") pod "3f76026f-41ae-4897-a348-ac4c49c2c2c5" (UID: "3f76026f-41ae-4897-a348-ac4c49c2c2c5"). InnerVolumeSpecName "kube-api-access-dgg8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:00:33 crc kubenswrapper[4869]: I0130 22:00:33.172083 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/448717a9-d5d0-47dc-9b41-ffbefbdbb175-kube-api-access-9ks69" (OuterVolumeSpecName: "kube-api-access-9ks69") pod "448717a9-d5d0-47dc-9b41-ffbefbdbb175" (UID: "448717a9-d5d0-47dc-9b41-ffbefbdbb175"). InnerVolumeSpecName "kube-api-access-9ks69". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:00:33 crc kubenswrapper[4869]: I0130 22:00:33.262235 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ks69\" (UniqueName: \"kubernetes.io/projected/448717a9-d5d0-47dc-9b41-ffbefbdbb175-kube-api-access-9ks69\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:33 crc kubenswrapper[4869]: I0130 22:00:33.262274 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f76026f-41ae-4897-a348-ac4c49c2c2c5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:33 crc kubenswrapper[4869]: I0130 22:00:33.262289 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/448717a9-d5d0-47dc-9b41-ffbefbdbb175-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:33 crc kubenswrapper[4869]: I0130 22:00:33.262301 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgg8r\" (UniqueName: \"kubernetes.io/projected/3f76026f-41ae-4897-a348-ac4c49c2c2c5-kube-api-access-dgg8r\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:33 crc kubenswrapper[4869]: I0130 22:00:33.342093 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-index-6ggmv"] Jan 30 22:00:33 crc kubenswrapper[4869]: W0130 22:00:33.356769 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c7d59e0_6997_436f_b17d_67e8d1c0f319.slice/crio-bad2f4d7ab34c4041df5d9cd153e4f0081a69490b6acd4dcb99897c489726d77 WatchSource:0}: Error finding container bad2f4d7ab34c4041df5d9cd153e4f0081a69490b6acd4dcb99897c489726d77: Status 404 returned error can't find the container with id bad2f4d7ab34c4041df5d9cd153e4f0081a69490b6acd4dcb99897c489726d77 Jan 30 22:00:33 crc kubenswrapper[4869]: I0130 22:00:33.643463 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-db-create-2wgdn" event={"ID":"3f76026f-41ae-4897-a348-ac4c49c2c2c5","Type":"ContainerDied","Data":"5a78e5c4b896f94a5048ad17012f8340671557fb955be7ebb8404e339991054c"} Jan 30 22:00:33 crc kubenswrapper[4869]: I0130 22:00:33.643502 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone-db-create-2wgdn" Jan 30 22:00:33 crc kubenswrapper[4869]: I0130 22:00:33.643546 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a78e5c4b896f94a5048ad17012f8340671557fb955be7ebb8404e339991054c" Jan 30 22:00:33 crc kubenswrapper[4869]: I0130 22:00:33.646033 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-431c-account-create-update-4fcmg" event={"ID":"448717a9-d5d0-47dc-9b41-ffbefbdbb175","Type":"ContainerDied","Data":"053c6ea90df306c6794468ba10878ce473678c7779c9b25ef13dbba4a39c3ec7"} Jan 30 22:00:33 crc kubenswrapper[4869]: I0130 22:00:33.646066 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="053c6ea90df306c6794468ba10878ce473678c7779c9b25ef13dbba4a39c3ec7" Jan 30 22:00:33 crc kubenswrapper[4869]: I0130 22:00:33.646065 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/keystone-431c-account-create-update-4fcmg" Jan 30 22:00:33 crc kubenswrapper[4869]: I0130 22:00:33.647993 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-index-6ggmv" event={"ID":"8c7d59e0-6997-436f-b17d-67e8d1c0f319","Type":"ContainerStarted","Data":"bad2f4d7ab34c4041df5d9cd153e4f0081a69490b6acd4dcb99897c489726d77"} Jan 30 22:00:36 crc kubenswrapper[4869]: I0130 22:00:36.670005 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-index-6ggmv" event={"ID":"8c7d59e0-6997-436f-b17d-67e8d1c0f319","Type":"ContainerStarted","Data":"034489ba30fbae8c9a8b4d8c6a003b0f6087ad084656ae62876b438e79e2c1a1"} Jan 30 22:00:36 crc kubenswrapper[4869]: I0130 22:00:36.692701 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-index-6ggmv" podStartSLOduration=2.519237072 podStartE2EDuration="4.692674478s" podCreationTimestamp="2026-01-30 22:00:32 +0000 UTC" firstStartedPulling="2026-01-30 22:00:33.359077882 +0000 UTC m=+1034.244835907" lastFinishedPulling="2026-01-30 22:00:35.532515288 +0000 UTC m=+1036.418273313" observedRunningTime="2026-01-30 22:00:36.687720111 +0000 UTC m=+1037.573478136" watchObservedRunningTime="2026-01-30 22:00:36.692674478 +0000 UTC m=+1037.578432503" Jan 30 22:00:40 crc kubenswrapper[4869]: I0130 22:00:40.637031 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 30 22:00:41 crc kubenswrapper[4869]: I0130 22:00:41.314873 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/keystone-db-sync-vmm9m"] Jan 30 22:00:41 crc kubenswrapper[4869]: E0130 22:00:41.315499 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="448717a9-d5d0-47dc-9b41-ffbefbdbb175" containerName="mariadb-account-create-update" Jan 30 22:00:41 crc kubenswrapper[4869]: I0130 22:00:41.315516 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="448717a9-d5d0-47dc-9b41-ffbefbdbb175" containerName="mariadb-account-create-update" Jan 30 22:00:41 crc kubenswrapper[4869]: E0130 22:00:41.315534 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f76026f-41ae-4897-a348-ac4c49c2c2c5" containerName="mariadb-database-create" Jan 30 22:00:41 crc kubenswrapper[4869]: I0130 22:00:41.315543 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f76026f-41ae-4897-a348-ac4c49c2c2c5" containerName="mariadb-database-create" Jan 30 22:00:41 crc kubenswrapper[4869]: I0130 22:00:41.315690 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f76026f-41ae-4897-a348-ac4c49c2c2c5" containerName="mariadb-database-create" Jan 30 22:00:41 crc kubenswrapper[4869]: I0130 22:00:41.315714 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="448717a9-d5d0-47dc-9b41-ffbefbdbb175" containerName="mariadb-account-create-update" Jan 30 22:00:41 crc kubenswrapper[4869]: I0130 22:00:41.316264 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/keystone-db-sync-vmm9m" Jan 30 22:00:41 crc kubenswrapper[4869]: I0130 22:00:41.318093 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone-scripts" Jan 30 22:00:41 crc kubenswrapper[4869]: I0130 22:00:41.318311 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone-config-data" Jan 30 22:00:41 crc kubenswrapper[4869]: I0130 22:00:41.320539 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone" Jan 30 22:00:41 crc kubenswrapper[4869]: I0130 22:00:41.320752 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone-keystone-dockercfg-nmqws" Jan 30 22:00:41 crc kubenswrapper[4869]: I0130 22:00:41.331527 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/keystone-db-sync-vmm9m"] Jan 30 22:00:41 crc kubenswrapper[4869]: I0130 22:00:41.384917 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/545292dd-58e7-4831-b0b9-02b1bf9621b3-config-data\") pod \"keystone-db-sync-vmm9m\" (UID: \"545292dd-58e7-4831-b0b9-02b1bf9621b3\") " pod="cinder-kuttl-tests/keystone-db-sync-vmm9m" Jan 30 22:00:41 crc kubenswrapper[4869]: I0130 22:00:41.384976 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jmqn\" (UniqueName: \"kubernetes.io/projected/545292dd-58e7-4831-b0b9-02b1bf9621b3-kube-api-access-6jmqn\") pod \"keystone-db-sync-vmm9m\" (UID: \"545292dd-58e7-4831-b0b9-02b1bf9621b3\") " pod="cinder-kuttl-tests/keystone-db-sync-vmm9m" Jan 30 22:00:41 crc kubenswrapper[4869]: I0130 22:00:41.486155 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/545292dd-58e7-4831-b0b9-02b1bf9621b3-config-data\") pod \"keystone-db-sync-vmm9m\" (UID: \"545292dd-58e7-4831-b0b9-02b1bf9621b3\") " pod="cinder-kuttl-tests/keystone-db-sync-vmm9m" Jan 30 22:00:41 crc kubenswrapper[4869]: I0130 22:00:41.486256 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jmqn\" (UniqueName: \"kubernetes.io/projected/545292dd-58e7-4831-b0b9-02b1bf9621b3-kube-api-access-6jmqn\") pod \"keystone-db-sync-vmm9m\" (UID: \"545292dd-58e7-4831-b0b9-02b1bf9621b3\") " pod="cinder-kuttl-tests/keystone-db-sync-vmm9m" Jan 30 22:00:41 crc kubenswrapper[4869]: I0130 22:00:41.496256 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/545292dd-58e7-4831-b0b9-02b1bf9621b3-config-data\") pod \"keystone-db-sync-vmm9m\" (UID: \"545292dd-58e7-4831-b0b9-02b1bf9621b3\") " pod="cinder-kuttl-tests/keystone-db-sync-vmm9m" Jan 30 22:00:41 crc kubenswrapper[4869]: I0130 22:00:41.509534 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jmqn\" (UniqueName: \"kubernetes.io/projected/545292dd-58e7-4831-b0b9-02b1bf9621b3-kube-api-access-6jmqn\") pod \"keystone-db-sync-vmm9m\" (UID: \"545292dd-58e7-4831-b0b9-02b1bf9621b3\") " pod="cinder-kuttl-tests/keystone-db-sync-vmm9m" Jan 30 22:00:41 crc kubenswrapper[4869]: I0130 22:00:41.639272 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/keystone-db-sync-vmm9m" Jan 30 22:00:42 crc kubenswrapper[4869]: I0130 22:00:42.078667 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/keystone-db-sync-vmm9m"] Jan 30 22:00:42 crc kubenswrapper[4869]: W0130 22:00:42.086052 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod545292dd_58e7_4831_b0b9_02b1bf9621b3.slice/crio-d3d6c7467204dc8ec1d7c1aed472b9506d98ba6665d0647b6742d2f348222fb6 WatchSource:0}: Error finding container d3d6c7467204dc8ec1d7c1aed472b9506d98ba6665d0647b6742d2f348222fb6: Status 404 returned error can't find the container with id d3d6c7467204dc8ec1d7c1aed472b9506d98ba6665d0647b6742d2f348222fb6 Jan 30 22:00:42 crc kubenswrapper[4869]: I0130 22:00:42.712356 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-db-sync-vmm9m" event={"ID":"545292dd-58e7-4831-b0b9-02b1bf9621b3","Type":"ContainerStarted","Data":"d3d6c7467204dc8ec1d7c1aed472b9506d98ba6665d0647b6742d2f348222fb6"} Jan 30 22:00:42 crc kubenswrapper[4869]: I0130 22:00:42.900767 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/cinder-operator-index-6ggmv" Jan 30 22:00:42 crc kubenswrapper[4869]: I0130 22:00:42.900806 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-index-6ggmv" Jan 30 22:00:42 crc kubenswrapper[4869]: I0130 22:00:42.947589 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/cinder-operator-index-6ggmv" Jan 30 22:00:43 crc kubenswrapper[4869]: I0130 22:00:43.759460 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-index-6ggmv" Jan 30 22:00:44 crc kubenswrapper[4869]: I0130 22:00:44.987566 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/e9fa3d5a7d9551a560d01de9ea70a85b1277acce4c8214021549f3bcd5jkkh7"] Jan 30 22:00:44 crc kubenswrapper[4869]: I0130 22:00:44.991773 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/e9fa3d5a7d9551a560d01de9ea70a85b1277acce4c8214021549f3bcd5jkkh7" Jan 30 22:00:45 crc kubenswrapper[4869]: I0130 22:00:44.997390 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-jpqpq" Jan 30 22:00:45 crc kubenswrapper[4869]: I0130 22:00:45.000288 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/e9fa3d5a7d9551a560d01de9ea70a85b1277acce4c8214021549f3bcd5jkkh7"] Jan 30 22:00:45 crc kubenswrapper[4869]: I0130 22:00:45.038447 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/50ef1a66-b644-41b1-90cd-0f64a5628e97-util\") pod \"e9fa3d5a7d9551a560d01de9ea70a85b1277acce4c8214021549f3bcd5jkkh7\" (UID: \"50ef1a66-b644-41b1-90cd-0f64a5628e97\") " pod="openstack-operators/e9fa3d5a7d9551a560d01de9ea70a85b1277acce4c8214021549f3bcd5jkkh7" Jan 30 22:00:45 crc kubenswrapper[4869]: I0130 22:00:45.038525 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stptr\" (UniqueName: \"kubernetes.io/projected/50ef1a66-b644-41b1-90cd-0f64a5628e97-kube-api-access-stptr\") pod \"e9fa3d5a7d9551a560d01de9ea70a85b1277acce4c8214021549f3bcd5jkkh7\" (UID: \"50ef1a66-b644-41b1-90cd-0f64a5628e97\") " pod="openstack-operators/e9fa3d5a7d9551a560d01de9ea70a85b1277acce4c8214021549f3bcd5jkkh7" Jan 30 22:00:45 crc kubenswrapper[4869]: I0130 22:00:45.038554 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/50ef1a66-b644-41b1-90cd-0f64a5628e97-bundle\") pod \"e9fa3d5a7d9551a560d01de9ea70a85b1277acce4c8214021549f3bcd5jkkh7\" (UID: \"50ef1a66-b644-41b1-90cd-0f64a5628e97\") " pod="openstack-operators/e9fa3d5a7d9551a560d01de9ea70a85b1277acce4c8214021549f3bcd5jkkh7" Jan 30 22:00:45 crc kubenswrapper[4869]: I0130 22:00:45.139595 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/50ef1a66-b644-41b1-90cd-0f64a5628e97-util\") pod \"e9fa3d5a7d9551a560d01de9ea70a85b1277acce4c8214021549f3bcd5jkkh7\" (UID: \"50ef1a66-b644-41b1-90cd-0f64a5628e97\") " pod="openstack-operators/e9fa3d5a7d9551a560d01de9ea70a85b1277acce4c8214021549f3bcd5jkkh7" Jan 30 22:00:45 crc kubenswrapper[4869]: I0130 22:00:45.139698 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stptr\" (UniqueName: \"kubernetes.io/projected/50ef1a66-b644-41b1-90cd-0f64a5628e97-kube-api-access-stptr\") pod \"e9fa3d5a7d9551a560d01de9ea70a85b1277acce4c8214021549f3bcd5jkkh7\" (UID: \"50ef1a66-b644-41b1-90cd-0f64a5628e97\") " pod="openstack-operators/e9fa3d5a7d9551a560d01de9ea70a85b1277acce4c8214021549f3bcd5jkkh7" Jan 30 22:00:45 crc kubenswrapper[4869]: I0130 22:00:45.139737 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/50ef1a66-b644-41b1-90cd-0f64a5628e97-bundle\") pod \"e9fa3d5a7d9551a560d01de9ea70a85b1277acce4c8214021549f3bcd5jkkh7\" (UID: \"50ef1a66-b644-41b1-90cd-0f64a5628e97\") " pod="openstack-operators/e9fa3d5a7d9551a560d01de9ea70a85b1277acce4c8214021549f3bcd5jkkh7" Jan 30 22:00:45 crc kubenswrapper[4869]: I0130 22:00:45.140215 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/50ef1a66-b644-41b1-90cd-0f64a5628e97-bundle\") pod \"e9fa3d5a7d9551a560d01de9ea70a85b1277acce4c8214021549f3bcd5jkkh7\" (UID: \"50ef1a66-b644-41b1-90cd-0f64a5628e97\") " pod="openstack-operators/e9fa3d5a7d9551a560d01de9ea70a85b1277acce4c8214021549f3bcd5jkkh7" Jan 30 22:00:45 crc kubenswrapper[4869]: I0130 22:00:45.140246 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/50ef1a66-b644-41b1-90cd-0f64a5628e97-util\") pod \"e9fa3d5a7d9551a560d01de9ea70a85b1277acce4c8214021549f3bcd5jkkh7\" (UID: \"50ef1a66-b644-41b1-90cd-0f64a5628e97\") " pod="openstack-operators/e9fa3d5a7d9551a560d01de9ea70a85b1277acce4c8214021549f3bcd5jkkh7" Jan 30 22:00:45 crc kubenswrapper[4869]: I0130 22:00:45.158968 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stptr\" (UniqueName: \"kubernetes.io/projected/50ef1a66-b644-41b1-90cd-0f64a5628e97-kube-api-access-stptr\") pod \"e9fa3d5a7d9551a560d01de9ea70a85b1277acce4c8214021549f3bcd5jkkh7\" (UID: \"50ef1a66-b644-41b1-90cd-0f64a5628e97\") " pod="openstack-operators/e9fa3d5a7d9551a560d01de9ea70a85b1277acce4c8214021549f3bcd5jkkh7" Jan 30 22:00:45 crc kubenswrapper[4869]: I0130 22:00:45.341251 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/e9fa3d5a7d9551a560d01de9ea70a85b1277acce4c8214021549f3bcd5jkkh7" Jan 30 22:00:52 crc kubenswrapper[4869]: I0130 22:00:52.094805 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/e9fa3d5a7d9551a560d01de9ea70a85b1277acce4c8214021549f3bcd5jkkh7"] Jan 30 22:00:52 crc kubenswrapper[4869]: W0130 22:00:52.105986 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50ef1a66_b644_41b1_90cd_0f64a5628e97.slice/crio-fec2d87466e5b38d780325f529ff24d68098ed0754651849e04d0671a0cfe848 WatchSource:0}: Error finding container fec2d87466e5b38d780325f529ff24d68098ed0754651849e04d0671a0cfe848: Status 404 returned error can't find the container with id fec2d87466e5b38d780325f529ff24d68098ed0754651849e04d0671a0cfe848 Jan 30 22:00:52 crc kubenswrapper[4869]: I0130 22:00:52.770989 4869 generic.go:334] "Generic (PLEG): container finished" podID="50ef1a66-b644-41b1-90cd-0f64a5628e97" containerID="2ed74e5453c0e8a73deb9cc35bac1c64e50e0f8e215edd599f389d88297337af" exitCode=0 Jan 30 22:00:52 crc kubenswrapper[4869]: I0130 22:00:52.771047 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e9fa3d5a7d9551a560d01de9ea70a85b1277acce4c8214021549f3bcd5jkkh7" event={"ID":"50ef1a66-b644-41b1-90cd-0f64a5628e97","Type":"ContainerDied","Data":"2ed74e5453c0e8a73deb9cc35bac1c64e50e0f8e215edd599f389d88297337af"} Jan 30 22:00:52 crc kubenswrapper[4869]: I0130 22:00:52.771079 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e9fa3d5a7d9551a560d01de9ea70a85b1277acce4c8214021549f3bcd5jkkh7" event={"ID":"50ef1a66-b644-41b1-90cd-0f64a5628e97","Type":"ContainerStarted","Data":"fec2d87466e5b38d780325f529ff24d68098ed0754651849e04d0671a0cfe848"} Jan 30 22:00:52 crc kubenswrapper[4869]: I0130 22:00:52.773788 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-db-sync-vmm9m" event={"ID":"545292dd-58e7-4831-b0b9-02b1bf9621b3","Type":"ContainerStarted","Data":"7889bc5a6aa75dbc449011f470d4c3f678a205cf9a7327b47403ac461525a781"} Jan 30 22:00:52 crc kubenswrapper[4869]: I0130 22:00:52.805041 4869 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/keystone-db-sync-vmm9m" podStartSLOduration=1.924047498 podStartE2EDuration="11.805015041s" podCreationTimestamp="2026-01-30 22:00:41 +0000 UTC" firstStartedPulling="2026-01-30 22:00:42.087575118 +0000 UTC m=+1042.973333143" lastFinishedPulling="2026-01-30 22:00:51.968542661 +0000 UTC m=+1052.854300686" observedRunningTime="2026-01-30 22:00:52.801308825 +0000 UTC m=+1053.687066850" watchObservedRunningTime="2026-01-30 22:00:52.805015041 +0000 UTC m=+1053.690773066" Jan 30 22:00:53 crc kubenswrapper[4869]: I0130 22:00:53.781410 4869 generic.go:334] "Generic (PLEG): container finished" podID="50ef1a66-b644-41b1-90cd-0f64a5628e97" containerID="e151a62b18056d6f7ec07e05338c6bce5afb27a9a3710a00f7e4ad8332cdbca3" exitCode=0 Jan 30 22:00:53 crc kubenswrapper[4869]: I0130 22:00:53.781497 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e9fa3d5a7d9551a560d01de9ea70a85b1277acce4c8214021549f3bcd5jkkh7" event={"ID":"50ef1a66-b644-41b1-90cd-0f64a5628e97","Type":"ContainerDied","Data":"e151a62b18056d6f7ec07e05338c6bce5afb27a9a3710a00f7e4ad8332cdbca3"} Jan 30 22:00:54 crc kubenswrapper[4869]: I0130 22:00:54.791995 4869 generic.go:334] "Generic (PLEG): container finished" podID="50ef1a66-b644-41b1-90cd-0f64a5628e97" containerID="b8992e03162149a5d8af20dd1efd5d35b16b03ca5a907a456d80341f03472e03" exitCode=0 Jan 30 22:00:54 crc kubenswrapper[4869]: I0130 22:00:54.792095 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e9fa3d5a7d9551a560d01de9ea70a85b1277acce4c8214021549f3bcd5jkkh7" event={"ID":"50ef1a66-b644-41b1-90cd-0f64a5628e97","Type":"ContainerDied","Data":"b8992e03162149a5d8af20dd1efd5d35b16b03ca5a907a456d80341f03472e03"} Jan 30 22:00:55 crc kubenswrapper[4869]: I0130 22:00:55.799357 4869 generic.go:334] "Generic (PLEG): container finished" podID="545292dd-58e7-4831-b0b9-02b1bf9621b3" containerID="7889bc5a6aa75dbc449011f470d4c3f678a205cf9a7327b47403ac461525a781" exitCode=0 Jan 30 22:00:55 crc kubenswrapper[4869]: I0130 22:00:55.799461 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-db-sync-vmm9m" event={"ID":"545292dd-58e7-4831-b0b9-02b1bf9621b3","Type":"ContainerDied","Data":"7889bc5a6aa75dbc449011f470d4c3f678a205cf9a7327b47403ac461525a781"} Jan 30 22:00:56 crc kubenswrapper[4869]: I0130 22:00:56.087086 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/e9fa3d5a7d9551a560d01de9ea70a85b1277acce4c8214021549f3bcd5jkkh7" Jan 30 22:00:56 crc kubenswrapper[4869]: I0130 22:00:56.120615 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stptr\" (UniqueName: \"kubernetes.io/projected/50ef1a66-b644-41b1-90cd-0f64a5628e97-kube-api-access-stptr\") pod \"50ef1a66-b644-41b1-90cd-0f64a5628e97\" (UID: \"50ef1a66-b644-41b1-90cd-0f64a5628e97\") " Jan 30 22:00:56 crc kubenswrapper[4869]: I0130 22:00:56.120682 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/50ef1a66-b644-41b1-90cd-0f64a5628e97-util\") pod \"50ef1a66-b644-41b1-90cd-0f64a5628e97\" (UID: \"50ef1a66-b644-41b1-90cd-0f64a5628e97\") " Jan 30 22:00:56 crc kubenswrapper[4869]: I0130 22:00:56.120740 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/50ef1a66-b644-41b1-90cd-0f64a5628e97-bundle\") pod \"50ef1a66-b644-41b1-90cd-0f64a5628e97\" (UID: \"50ef1a66-b644-41b1-90cd-0f64a5628e97\") " Jan 30 22:00:56 crc kubenswrapper[4869]: I0130 22:00:56.122140 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50ef1a66-b644-41b1-90cd-0f64a5628e97-bundle" (OuterVolumeSpecName: "bundle") pod "50ef1a66-b644-41b1-90cd-0f64a5628e97" (UID: "50ef1a66-b644-41b1-90cd-0f64a5628e97"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:00:56 crc kubenswrapper[4869]: I0130 22:00:56.126328 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50ef1a66-b644-41b1-90cd-0f64a5628e97-kube-api-access-stptr" (OuterVolumeSpecName: "kube-api-access-stptr") pod "50ef1a66-b644-41b1-90cd-0f64a5628e97" (UID: "50ef1a66-b644-41b1-90cd-0f64a5628e97"). InnerVolumeSpecName "kube-api-access-stptr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:00:56 crc kubenswrapper[4869]: I0130 22:00:56.137671 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50ef1a66-b644-41b1-90cd-0f64a5628e97-util" (OuterVolumeSpecName: "util") pod "50ef1a66-b644-41b1-90cd-0f64a5628e97" (UID: "50ef1a66-b644-41b1-90cd-0f64a5628e97"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:00:56 crc kubenswrapper[4869]: I0130 22:00:56.222537 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stptr\" (UniqueName: \"kubernetes.io/projected/50ef1a66-b644-41b1-90cd-0f64a5628e97-kube-api-access-stptr\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:56 crc kubenswrapper[4869]: I0130 22:00:56.222579 4869 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/50ef1a66-b644-41b1-90cd-0f64a5628e97-util\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:56 crc kubenswrapper[4869]: I0130 22:00:56.222590 4869 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/50ef1a66-b644-41b1-90cd-0f64a5628e97-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:56 crc kubenswrapper[4869]: I0130 22:00:56.809128 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e9fa3d5a7d9551a560d01de9ea70a85b1277acce4c8214021549f3bcd5jkkh7" event={"ID":"50ef1a66-b644-41b1-90cd-0f64a5628e97","Type":"ContainerDied","Data":"fec2d87466e5b38d780325f529ff24d68098ed0754651849e04d0671a0cfe848"} Jan 30 22:00:56 crc kubenswrapper[4869]: I0130 22:00:56.809185 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fec2d87466e5b38d780325f529ff24d68098ed0754651849e04d0671a0cfe848" Jan 30 22:00:56 crc kubenswrapper[4869]: I0130 22:00:56.809148 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/e9fa3d5a7d9551a560d01de9ea70a85b1277acce4c8214021549f3bcd5jkkh7" Jan 30 22:00:57 crc kubenswrapper[4869]: I0130 22:00:57.112140 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone-db-sync-vmm9m" Jan 30 22:00:57 crc kubenswrapper[4869]: I0130 22:00:57.234972 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jmqn\" (UniqueName: \"kubernetes.io/projected/545292dd-58e7-4831-b0b9-02b1bf9621b3-kube-api-access-6jmqn\") pod \"545292dd-58e7-4831-b0b9-02b1bf9621b3\" (UID: \"545292dd-58e7-4831-b0b9-02b1bf9621b3\") " Jan 30 22:00:57 crc kubenswrapper[4869]: I0130 22:00:57.235115 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/545292dd-58e7-4831-b0b9-02b1bf9621b3-config-data\") pod \"545292dd-58e7-4831-b0b9-02b1bf9621b3\" (UID: \"545292dd-58e7-4831-b0b9-02b1bf9621b3\") " Jan 30 22:00:57 crc kubenswrapper[4869]: I0130 22:00:57.247784 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/545292dd-58e7-4831-b0b9-02b1bf9621b3-kube-api-access-6jmqn" (OuterVolumeSpecName: "kube-api-access-6jmqn") pod "545292dd-58e7-4831-b0b9-02b1bf9621b3" (UID: "545292dd-58e7-4831-b0b9-02b1bf9621b3"). InnerVolumeSpecName "kube-api-access-6jmqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:00:57 crc kubenswrapper[4869]: I0130 22:00:57.267511 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/545292dd-58e7-4831-b0b9-02b1bf9621b3-config-data" (OuterVolumeSpecName: "config-data") pod "545292dd-58e7-4831-b0b9-02b1bf9621b3" (UID: "545292dd-58e7-4831-b0b9-02b1bf9621b3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:00:57 crc kubenswrapper[4869]: I0130 22:00:57.336965 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/545292dd-58e7-4831-b0b9-02b1bf9621b3-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:57 crc kubenswrapper[4869]: I0130 22:00:57.336997 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jmqn\" (UniqueName: \"kubernetes.io/projected/545292dd-58e7-4831-b0b9-02b1bf9621b3-kube-api-access-6jmqn\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:57 crc kubenswrapper[4869]: I0130 22:00:57.823267 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-db-sync-vmm9m" event={"ID":"545292dd-58e7-4831-b0b9-02b1bf9621b3","Type":"ContainerDied","Data":"d3d6c7467204dc8ec1d7c1aed472b9506d98ba6665d0647b6742d2f348222fb6"} Jan 30 22:00:57 crc kubenswrapper[4869]: I0130 22:00:57.823319 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3d6c7467204dc8ec1d7c1aed472b9506d98ba6665d0647b6742d2f348222fb6" Jan 30 22:00:57 crc kubenswrapper[4869]: I0130 22:00:57.823434 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone-db-sync-vmm9m" Jan 30 22:00:57 crc kubenswrapper[4869]: I0130 22:00:57.999258 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/keystone-bootstrap-jj9nl"] Jan 30 22:00:57 crc kubenswrapper[4869]: E0130 22:00:57.999487 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50ef1a66-b644-41b1-90cd-0f64a5628e97" containerName="extract" Jan 30 22:00:57 crc kubenswrapper[4869]: I0130 22:00:57.999498 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="50ef1a66-b644-41b1-90cd-0f64a5628e97" containerName="extract" Jan 30 22:00:57 crc kubenswrapper[4869]: E0130 22:00:57.999507 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50ef1a66-b644-41b1-90cd-0f64a5628e97" containerName="pull" Jan 30 22:00:57 crc kubenswrapper[4869]: I0130 22:00:57.999513 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="50ef1a66-b644-41b1-90cd-0f64a5628e97" containerName="pull" Jan 30 22:00:57 crc kubenswrapper[4869]: E0130 22:00:57.999538 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50ef1a66-b644-41b1-90cd-0f64a5628e97" containerName="util" Jan 30 22:00:57 crc kubenswrapper[4869]: I0130 22:00:57.999546 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="50ef1a66-b644-41b1-90cd-0f64a5628e97" containerName="util" Jan 30 22:00:57 crc kubenswrapper[4869]: E0130 22:00:57.999557 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="545292dd-58e7-4831-b0b9-02b1bf9621b3" containerName="keystone-db-sync" Jan 30 22:00:57 crc kubenswrapper[4869]: I0130 22:00:57.999566 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="545292dd-58e7-4831-b0b9-02b1bf9621b3" containerName="keystone-db-sync" Jan 30 22:00:57 crc kubenswrapper[4869]: I0130 22:00:57.999669 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="545292dd-58e7-4831-b0b9-02b1bf9621b3" containerName="keystone-db-sync" Jan 30 22:00:57 crc kubenswrapper[4869]: I0130 22:00:57.999683 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="50ef1a66-b644-41b1-90cd-0f64a5628e97" containerName="extract" Jan 30 22:00:58 crc kubenswrapper[4869]: I0130 22:00:58.000073 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/keystone-bootstrap-jj9nl" Jan 30 22:00:58 crc kubenswrapper[4869]: I0130 22:00:58.001710 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone-scripts" Jan 30 22:00:58 crc kubenswrapper[4869]: I0130 22:00:58.001946 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone" Jan 30 22:00:58 crc kubenswrapper[4869]: I0130 22:00:58.002196 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone-keystone-dockercfg-nmqws" Jan 30 22:00:58 crc kubenswrapper[4869]: I0130 22:00:58.002340 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"osp-secret" Jan 30 22:00:58 crc kubenswrapper[4869]: I0130 22:00:58.002560 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone-config-data" Jan 30 22:00:58 crc kubenswrapper[4869]: I0130 22:00:58.018929 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/keystone-bootstrap-jj9nl"] Jan 30 22:00:58 crc kubenswrapper[4869]: I0130 22:00:58.044493 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e9514e0-09ca-45de-b21e-d629fd81dc25-config-data\") pod \"keystone-bootstrap-jj9nl\" (UID: \"2e9514e0-09ca-45de-b21e-d629fd81dc25\") " pod="cinder-kuttl-tests/keystone-bootstrap-jj9nl" Jan 30 22:00:58 crc kubenswrapper[4869]: I0130 22:00:58.044541 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e9514e0-09ca-45de-b21e-d629fd81dc25-scripts\") pod \"keystone-bootstrap-jj9nl\" (UID: \"2e9514e0-09ca-45de-b21e-d629fd81dc25\") " pod="cinder-kuttl-tests/keystone-bootstrap-jj9nl" Jan 30 22:00:58 crc kubenswrapper[4869]: I0130 22:00:58.044780 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2e9514e0-09ca-45de-b21e-d629fd81dc25-credential-keys\") pod \"keystone-bootstrap-jj9nl\" (UID: \"2e9514e0-09ca-45de-b21e-d629fd81dc25\") " pod="cinder-kuttl-tests/keystone-bootstrap-jj9nl" Jan 30 22:00:58 crc kubenswrapper[4869]: I0130 22:00:58.044886 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2e9514e0-09ca-45de-b21e-d629fd81dc25-fernet-keys\") pod \"keystone-bootstrap-jj9nl\" (UID: \"2e9514e0-09ca-45de-b21e-d629fd81dc25\") " pod="cinder-kuttl-tests/keystone-bootstrap-jj9nl" Jan 30 22:00:58 crc kubenswrapper[4869]: I0130 22:00:58.045144 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jqqt\" (UniqueName: \"kubernetes.io/projected/2e9514e0-09ca-45de-b21e-d629fd81dc25-kube-api-access-8jqqt\") pod \"keystone-bootstrap-jj9nl\" (UID: \"2e9514e0-09ca-45de-b21e-d629fd81dc25\") " pod="cinder-kuttl-tests/keystone-bootstrap-jj9nl" Jan 30 22:00:58 crc kubenswrapper[4869]: I0130 22:00:58.146269 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e9514e0-09ca-45de-b21e-d629fd81dc25-config-data\") pod \"keystone-bootstrap-jj9nl\" (UID: \"2e9514e0-09ca-45de-b21e-d629fd81dc25\") " pod="cinder-kuttl-tests/keystone-bootstrap-jj9nl" Jan 30 22:00:58 crc kubenswrapper[4869]: I0130 
22:00:58.146314 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e9514e0-09ca-45de-b21e-d629fd81dc25-scripts\") pod \"keystone-bootstrap-jj9nl\" (UID: \"2e9514e0-09ca-45de-b21e-d629fd81dc25\") " pod="cinder-kuttl-tests/keystone-bootstrap-jj9nl" Jan 30 22:00:58 crc kubenswrapper[4869]: I0130 22:00:58.146417 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2e9514e0-09ca-45de-b21e-d629fd81dc25-credential-keys\") pod \"keystone-bootstrap-jj9nl\" (UID: \"2e9514e0-09ca-45de-b21e-d629fd81dc25\") " pod="cinder-kuttl-tests/keystone-bootstrap-jj9nl" Jan 30 22:00:58 crc kubenswrapper[4869]: I0130 22:00:58.146446 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2e9514e0-09ca-45de-b21e-d629fd81dc25-fernet-keys\") pod \"keystone-bootstrap-jj9nl\" (UID: \"2e9514e0-09ca-45de-b21e-d629fd81dc25\") " pod="cinder-kuttl-tests/keystone-bootstrap-jj9nl" Jan 30 22:00:58 crc kubenswrapper[4869]: I0130 22:00:58.146470 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jqqt\" (UniqueName: \"kubernetes.io/projected/2e9514e0-09ca-45de-b21e-d629fd81dc25-kube-api-access-8jqqt\") pod \"keystone-bootstrap-jj9nl\" (UID: \"2e9514e0-09ca-45de-b21e-d629fd81dc25\") " pod="cinder-kuttl-tests/keystone-bootstrap-jj9nl" Jan 30 22:00:58 crc kubenswrapper[4869]: I0130 22:00:58.149542 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e9514e0-09ca-45de-b21e-d629fd81dc25-scripts\") pod \"keystone-bootstrap-jj9nl\" (UID: \"2e9514e0-09ca-45de-b21e-d629fd81dc25\") " pod="cinder-kuttl-tests/keystone-bootstrap-jj9nl" Jan 30 22:00:58 crc kubenswrapper[4869]: I0130 22:00:58.149754 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2e9514e0-09ca-45de-b21e-d629fd81dc25-fernet-keys\") pod \"keystone-bootstrap-jj9nl\" (UID: \"2e9514e0-09ca-45de-b21e-d629fd81dc25\") " pod="cinder-kuttl-tests/keystone-bootstrap-jj9nl" Jan 30 22:00:58 crc kubenswrapper[4869]: I0130 22:00:58.150103 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2e9514e0-09ca-45de-b21e-d629fd81dc25-credential-keys\") pod \"keystone-bootstrap-jj9nl\" (UID: \"2e9514e0-09ca-45de-b21e-d629fd81dc25\") " pod="cinder-kuttl-tests/keystone-bootstrap-jj9nl" Jan 30 22:00:58 crc kubenswrapper[4869]: I0130 22:00:58.151844 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e9514e0-09ca-45de-b21e-d629fd81dc25-config-data\") pod \"keystone-bootstrap-jj9nl\" (UID: \"2e9514e0-09ca-45de-b21e-d629fd81dc25\") " pod="cinder-kuttl-tests/keystone-bootstrap-jj9nl" Jan 30 22:00:58 crc kubenswrapper[4869]: I0130 22:00:58.162493 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jqqt\" (UniqueName: \"kubernetes.io/projected/2e9514e0-09ca-45de-b21e-d629fd81dc25-kube-api-access-8jqqt\") pod \"keystone-bootstrap-jj9nl\" (UID: \"2e9514e0-09ca-45de-b21e-d629fd81dc25\") " pod="cinder-kuttl-tests/keystone-bootstrap-jj9nl" Jan 30 22:00:58 crc kubenswrapper[4869]: I0130 22:00:58.317265 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/keystone-bootstrap-jj9nl" Jan 30 22:00:58 crc kubenswrapper[4869]: I0130 22:00:58.718656 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/keystone-bootstrap-jj9nl"] Jan 30 22:00:58 crc kubenswrapper[4869]: I0130 22:00:58.832482 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-bootstrap-jj9nl" event={"ID":"2e9514e0-09ca-45de-b21e-d629fd81dc25","Type":"ContainerStarted","Data":"5827a2c377435d3c368d0fde6db7744e04940454622c2c81068173b75341bc33"} Jan 30 22:00:59 crc kubenswrapper[4869]: I0130 22:00:59.840964 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-bootstrap-jj9nl" event={"ID":"2e9514e0-09ca-45de-b21e-d629fd81dc25","Type":"ContainerStarted","Data":"1d21d7b187c75e362e31c6f333629885316f317de229c557269e00631d9659d8"} Jan 30 22:01:01 crc kubenswrapper[4869]: I0130 22:01:01.990533 4869 patch_prober.go:28] interesting pod/machine-config-daemon-vzgdv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:01:01 crc kubenswrapper[4869]: I0130 22:01:01.990886 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:01:01 crc kubenswrapper[4869]: I0130 22:01:01.990949 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" Jan 30 22:01:01 crc kubenswrapper[4869]: I0130 22:01:01.991432 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6192238ff2265f4c3d1b0ce1e53fb7be4490d1247ceca7a0a3ab91e6567a4b90"} pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 22:01:01 crc kubenswrapper[4869]: I0130 22:01:01.991493 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500" containerName="machine-config-daemon" containerID="cri-o://6192238ff2265f4c3d1b0ce1e53fb7be4490d1247ceca7a0a3ab91e6567a4b90" gracePeriod=600 Jan 30 22:01:02 crc kubenswrapper[4869]: I0130 22:01:02.862643 4869 generic.go:334] "Generic (PLEG): container finished" podID="b6fc0664-5e80-440d-a6e8-4189cdf5c500" containerID="6192238ff2265f4c3d1b0ce1e53fb7be4490d1247ceca7a0a3ab91e6567a4b90" exitCode=0 Jan 30 22:01:02 crc kubenswrapper[4869]: I0130 22:01:02.862736 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" event={"ID":"b6fc0664-5e80-440d-a6e8-4189cdf5c500","Type":"ContainerDied","Data":"6192238ff2265f4c3d1b0ce1e53fb7be4490d1247ceca7a0a3ab91e6567a4b90"} Jan 30 22:01:02 crc kubenswrapper[4869]: I0130 22:01:02.863624 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" 
event={"ID":"b6fc0664-5e80-440d-a6e8-4189cdf5c500","Type":"ContainerStarted","Data":"fb268ebdb9eedb26a9652217f7a5aa752de4c3f089acc9c91036b9bb0160a969"} Jan 30 22:01:02 crc kubenswrapper[4869]: I0130 22:01:02.863654 4869 scope.go:117] "RemoveContainer" containerID="a7ff76a93ea10c54b4c308e0ca79595cb658f2169be1db3ae81fc5f671455e21" Jan 30 22:01:02 crc kubenswrapper[4869]: I0130 22:01:02.865858 4869 generic.go:334] "Generic (PLEG): container finished" podID="2e9514e0-09ca-45de-b21e-d629fd81dc25" containerID="1d21d7b187c75e362e31c6f333629885316f317de229c557269e00631d9659d8" exitCode=0 Jan 30 22:01:02 crc kubenswrapper[4869]: I0130 22:01:02.865915 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-bootstrap-jj9nl" event={"ID":"2e9514e0-09ca-45de-b21e-d629fd81dc25","Type":"ContainerDied","Data":"1d21d7b187c75e362e31c6f333629885316f317de229c557269e00631d9659d8"} Jan 30 22:01:02 crc kubenswrapper[4869]: I0130 22:01:02.891108 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/keystone-bootstrap-jj9nl" podStartSLOduration=5.891086191 podStartE2EDuration="5.891086191s" podCreationTimestamp="2026-01-30 22:00:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:00:59.863296001 +0000 UTC m=+1060.749054026" watchObservedRunningTime="2026-01-30 22:01:02.891086191 +0000 UTC m=+1063.776844216" Jan 30 22:01:04 crc kubenswrapper[4869]: I0130 22:01:04.179106 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone-bootstrap-jj9nl" Jan 30 22:01:04 crc kubenswrapper[4869]: I0130 22:01:04.326397 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jqqt\" (UniqueName: \"kubernetes.io/projected/2e9514e0-09ca-45de-b21e-d629fd81dc25-kube-api-access-8jqqt\") pod \"2e9514e0-09ca-45de-b21e-d629fd81dc25\" (UID: \"2e9514e0-09ca-45de-b21e-d629fd81dc25\") " Jan 30 22:01:04 crc kubenswrapper[4869]: I0130 22:01:04.326456 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2e9514e0-09ca-45de-b21e-d629fd81dc25-credential-keys\") pod \"2e9514e0-09ca-45de-b21e-d629fd81dc25\" (UID: \"2e9514e0-09ca-45de-b21e-d629fd81dc25\") " Jan 30 22:01:04 crc kubenswrapper[4869]: I0130 22:01:04.326518 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e9514e0-09ca-45de-b21e-d629fd81dc25-config-data\") pod \"2e9514e0-09ca-45de-b21e-d629fd81dc25\" (UID: \"2e9514e0-09ca-45de-b21e-d629fd81dc25\") " Jan 30 22:01:04 crc kubenswrapper[4869]: I0130 22:01:04.326575 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2e9514e0-09ca-45de-b21e-d629fd81dc25-fernet-keys\") pod \"2e9514e0-09ca-45de-b21e-d629fd81dc25\" (UID: \"2e9514e0-09ca-45de-b21e-d629fd81dc25\") " Jan 30 22:01:04 crc kubenswrapper[4869]: I0130 22:01:04.326606 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e9514e0-09ca-45de-b21e-d629fd81dc25-scripts\") pod \"2e9514e0-09ca-45de-b21e-d629fd81dc25\" (UID: \"2e9514e0-09ca-45de-b21e-d629fd81dc25\") " Jan 30 22:01:04 crc kubenswrapper[4869]: I0130 22:01:04.335053 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/2e9514e0-09ca-45de-b21e-d629fd81dc25-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "2e9514e0-09ca-45de-b21e-d629fd81dc25" (UID: "2e9514e0-09ca-45de-b21e-d629fd81dc25"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:01:04 crc kubenswrapper[4869]: I0130 22:01:04.336746 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e9514e0-09ca-45de-b21e-d629fd81dc25-kube-api-access-8jqqt" (OuterVolumeSpecName: "kube-api-access-8jqqt") pod "2e9514e0-09ca-45de-b21e-d629fd81dc25" (UID: "2e9514e0-09ca-45de-b21e-d629fd81dc25"). InnerVolumeSpecName "kube-api-access-8jqqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:01:04 crc kubenswrapper[4869]: I0130 22:01:04.345143 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e9514e0-09ca-45de-b21e-d629fd81dc25-scripts" (OuterVolumeSpecName: "scripts") pod "2e9514e0-09ca-45de-b21e-d629fd81dc25" (UID: "2e9514e0-09ca-45de-b21e-d629fd81dc25"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:01:04 crc kubenswrapper[4869]: I0130 22:01:04.354130 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e9514e0-09ca-45de-b21e-d629fd81dc25-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "2e9514e0-09ca-45de-b21e-d629fd81dc25" (UID: "2e9514e0-09ca-45de-b21e-d629fd81dc25"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:01:04 crc kubenswrapper[4869]: I0130 22:01:04.363065 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e9514e0-09ca-45de-b21e-d629fd81dc25-config-data" (OuterVolumeSpecName: "config-data") pod "2e9514e0-09ca-45de-b21e-d629fd81dc25" (UID: "2e9514e0-09ca-45de-b21e-d629fd81dc25"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:01:04 crc kubenswrapper[4869]: I0130 22:01:04.428800 4869 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2e9514e0-09ca-45de-b21e-d629fd81dc25-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:04 crc kubenswrapper[4869]: I0130 22:01:04.428914 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e9514e0-09ca-45de-b21e-d629fd81dc25-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:04 crc kubenswrapper[4869]: I0130 22:01:04.428938 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jqqt\" (UniqueName: \"kubernetes.io/projected/2e9514e0-09ca-45de-b21e-d629fd81dc25-kube-api-access-8jqqt\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:04 crc kubenswrapper[4869]: I0130 22:01:04.428952 4869 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2e9514e0-09ca-45de-b21e-d629fd81dc25-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:04 crc kubenswrapper[4869]: I0130 22:01:04.428964 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e9514e0-09ca-45de-b21e-d629fd81dc25-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:04 crc kubenswrapper[4869]: I0130 22:01:04.881939 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-bootstrap-jj9nl" event={"ID":"2e9514e0-09ca-45de-b21e-d629fd81dc25","Type":"ContainerDied","Data":"5827a2c377435d3c368d0fde6db7744e04940454622c2c81068173b75341bc33"} Jan 30 22:01:04 crc kubenswrapper[4869]: I0130 22:01:04.882267 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5827a2c377435d3c368d0fde6db7744e04940454622c2c81068173b75341bc33" Jan 30 22:01:04 crc kubenswrapper[4869]: I0130 22:01:04.881987 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone-bootstrap-jj9nl" Jan 30 22:01:05 crc kubenswrapper[4869]: I0130 22:01:05.090344 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/keystone-5b988b97cc-bzpmz"] Jan 30 22:01:05 crc kubenswrapper[4869]: E0130 22:01:05.090711 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e9514e0-09ca-45de-b21e-d629fd81dc25" containerName="keystone-bootstrap" Jan 30 22:01:05 crc kubenswrapper[4869]: I0130 22:01:05.090735 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e9514e0-09ca-45de-b21e-d629fd81dc25" containerName="keystone-bootstrap" Jan 30 22:01:05 crc kubenswrapper[4869]: I0130 22:01:05.090869 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e9514e0-09ca-45de-b21e-d629fd81dc25" containerName="keystone-bootstrap" Jan 30 22:01:05 crc kubenswrapper[4869]: I0130 22:01:05.091403 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/keystone-5b988b97cc-bzpmz" Jan 30 22:01:05 crc kubenswrapper[4869]: I0130 22:01:05.094096 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone-scripts" Jan 30 22:01:05 crc kubenswrapper[4869]: I0130 22:01:05.094153 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone" Jan 30 22:01:05 crc kubenswrapper[4869]: I0130 22:01:05.095480 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone-config-data" Jan 30 22:01:05 crc kubenswrapper[4869]: I0130 22:01:05.095747 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone-keystone-dockercfg-nmqws" Jan 30 22:01:05 crc kubenswrapper[4869]: I0130 22:01:05.108101 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/keystone-5b988b97cc-bzpmz"] Jan 30 22:01:05 crc kubenswrapper[4869]: I0130 22:01:05.240083 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce-credential-keys\") pod \"keystone-5b988b97cc-bzpmz\" (UID: \"44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce\") " pod="cinder-kuttl-tests/keystone-5b988b97cc-bzpmz" Jan 30 22:01:05 crc kubenswrapper[4869]: I0130 22:01:05.240171 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce-config-data\") pod \"keystone-5b988b97cc-bzpmz\" (UID: \"44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce\") " pod="cinder-kuttl-tests/keystone-5b988b97cc-bzpmz" Jan 30 22:01:05 crc kubenswrapper[4869]: I0130 22:01:05.240197 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc68w\" (UniqueName: \"kubernetes.io/projected/44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce-kube-api-access-qc68w\") pod \"keystone-5b988b97cc-bzpmz\" (UID: \"44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce\") " pod="cinder-kuttl-tests/keystone-5b988b97cc-bzpmz" Jan 30 22:01:05 crc kubenswrapper[4869]: I0130 22:01:05.240224 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce-fernet-keys\") pod \"keystone-5b988b97cc-bzpmz\" (UID: \"44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce\") " pod="cinder-kuttl-tests/keystone-5b988b97cc-bzpmz" Jan 30 22:01:05 crc kubenswrapper[4869]: I0130 22:01:05.240299 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce-scripts\") pod \"keystone-5b988b97cc-bzpmz\" (UID: \"44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce\") " pod="cinder-kuttl-tests/keystone-5b988b97cc-bzpmz" Jan 30 22:01:05 crc kubenswrapper[4869]: I0130 22:01:05.341491 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce-config-data\") pod \"keystone-5b988b97cc-bzpmz\" (UID: \"44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce\") " pod="cinder-kuttl-tests/keystone-5b988b97cc-bzpmz" Jan 30 22:01:05 crc kubenswrapper[4869]: I0130 22:01:05.341546 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc68w\" (UniqueName: 
\"kubernetes.io/projected/44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce-kube-api-access-qc68w\") pod \"keystone-5b988b97cc-bzpmz\" (UID: \"44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce\") " pod="cinder-kuttl-tests/keystone-5b988b97cc-bzpmz" Jan 30 22:01:05 crc kubenswrapper[4869]: I0130 22:01:05.341571 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce-fernet-keys\") pod \"keystone-5b988b97cc-bzpmz\" (UID: \"44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce\") " pod="cinder-kuttl-tests/keystone-5b988b97cc-bzpmz" Jan 30 22:01:05 crc kubenswrapper[4869]: I0130 22:01:05.341649 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce-scripts\") pod \"keystone-5b988b97cc-bzpmz\" (UID: \"44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce\") " pod="cinder-kuttl-tests/keystone-5b988b97cc-bzpmz" Jan 30 22:01:05 crc kubenswrapper[4869]: I0130 22:01:05.341679 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce-credential-keys\") pod \"keystone-5b988b97cc-bzpmz\" (UID: \"44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce\") " pod="cinder-kuttl-tests/keystone-5b988b97cc-bzpmz" Jan 30 22:01:05 crc kubenswrapper[4869]: I0130 22:01:05.346821 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce-fernet-keys\") pod \"keystone-5b988b97cc-bzpmz\" (UID: \"44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce\") " pod="cinder-kuttl-tests/keystone-5b988b97cc-bzpmz" Jan 30 22:01:05 crc kubenswrapper[4869]: I0130 22:01:05.347309 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce-scripts\") pod \"keystone-5b988b97cc-bzpmz\" (UID: \"44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce\") " pod="cinder-kuttl-tests/keystone-5b988b97cc-bzpmz" Jan 30 22:01:05 crc kubenswrapper[4869]: I0130 22:01:05.348333 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce-credential-keys\") pod \"keystone-5b988b97cc-bzpmz\" (UID: \"44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce\") " pod="cinder-kuttl-tests/keystone-5b988b97cc-bzpmz" Jan 30 22:01:05 crc kubenswrapper[4869]: I0130 22:01:05.349593 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce-config-data\") pod \"keystone-5b988b97cc-bzpmz\" (UID: \"44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce\") " pod="cinder-kuttl-tests/keystone-5b988b97cc-bzpmz" Jan 30 22:01:05 crc kubenswrapper[4869]: I0130 22:01:05.361271 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc68w\" (UniqueName: \"kubernetes.io/projected/44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce-kube-api-access-qc68w\") pod \"keystone-5b988b97cc-bzpmz\" (UID: \"44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce\") " pod="cinder-kuttl-tests/keystone-5b988b97cc-bzpmz" Jan 30 22:01:05 crc kubenswrapper[4869]: I0130 22:01:05.454152 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/keystone-5b988b97cc-bzpmz" Jan 30 22:01:05 crc kubenswrapper[4869]: I0130 22:01:05.885913 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/keystone-5b988b97cc-bzpmz"] Jan 30 22:01:05 crc kubenswrapper[4869]: W0130 22:01:05.889136 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44dcbfe1_c9ef_44ec_b14f_0c3d1afe14ce.slice/crio-123ffe81d0938e9c86d4b2c1300b04e0b2f0639501d16e0370f3aff48f9c178d WatchSource:0}: Error finding container 123ffe81d0938e9c86d4b2c1300b04e0b2f0639501d16e0370f3aff48f9c178d: Status 404 returned error can't find the container with id 123ffe81d0938e9c86d4b2c1300b04e0b2f0639501d16e0370f3aff48f9c178d Jan 30 22:01:06 crc kubenswrapper[4869]: I0130 22:01:06.901657 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-5b988b97cc-bzpmz" event={"ID":"44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce","Type":"ContainerStarted","Data":"75f1036fa6f0f1ff3b7576bfb9f5e9ac887e0a12fa3839de2963ceac60f4410e"} Jan 30 22:01:06 crc kubenswrapper[4869]: I0130 22:01:06.902282 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-5b988b97cc-bzpmz" event={"ID":"44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce","Type":"ContainerStarted","Data":"123ffe81d0938e9c86d4b2c1300b04e0b2f0639501d16e0370f3aff48f9c178d"} Jan 30 22:01:06 crc kubenswrapper[4869]: I0130 22:01:06.903149 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cinder-kuttl-tests/keystone-5b988b97cc-bzpmz" Jan 30 22:01:06 crc kubenswrapper[4869]: I0130 22:01:06.929493 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/keystone-5b988b97cc-bzpmz" podStartSLOduration=1.9294608100000001 podStartE2EDuration="1.92946081s" podCreationTimestamp="2026-01-30 22:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:01:06.924525575 +0000 UTC m=+1067.810283620" watchObservedRunningTime="2026-01-30 22:01:06.92946081 +0000 UTC m=+1067.815218835" Jan 30 22:01:09 crc kubenswrapper[4869]: I0130 22:01:09.827564 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-64c8b49677-kb8ql"] Jan 30 22:01:09 crc kubenswrapper[4869]: I0130 22:01:09.828941 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-64c8b49677-kb8ql" Jan 30 22:01:09 crc kubenswrapper[4869]: I0130 22:01:09.840819 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-bg649" Jan 30 22:01:09 crc kubenswrapper[4869]: I0130 22:01:09.841115 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-service-cert" Jan 30 22:01:09 crc kubenswrapper[4869]: I0130 22:01:09.846763 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-64c8b49677-kb8ql"] Jan 30 22:01:10 crc kubenswrapper[4869]: I0130 22:01:10.013968 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/13c3e212-4606-4375-96dd-b1fcf8a40d94-webhook-cert\") pod \"cinder-operator-controller-manager-64c8b49677-kb8ql\" (UID: \"13c3e212-4606-4375-96dd-b1fcf8a40d94\") " pod="openstack-operators/cinder-operator-controller-manager-64c8b49677-kb8ql" Jan 30 22:01:10 crc kubenswrapper[4869]: I0130 22:01:10.014035 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/13c3e212-4606-4375-96dd-b1fcf8a40d94-apiservice-cert\") pod \"cinder-operator-controller-manager-64c8b49677-kb8ql\" (UID: \"13c3e212-4606-4375-96dd-b1fcf8a40d94\") " pod="openstack-operators/cinder-operator-controller-manager-64c8b49677-kb8ql" Jan 30 22:01:10 crc kubenswrapper[4869]: I0130 22:01:10.014067 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zprrd\" (UniqueName: \"kubernetes.io/projected/13c3e212-4606-4375-96dd-b1fcf8a40d94-kube-api-access-zprrd\") pod \"cinder-operator-controller-manager-64c8b49677-kb8ql\" (UID: \"13c3e212-4606-4375-96dd-b1fcf8a40d94\") " pod="openstack-operators/cinder-operator-controller-manager-64c8b49677-kb8ql" Jan 30 22:01:10 crc kubenswrapper[4869]: I0130 22:01:10.115318 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/13c3e212-4606-4375-96dd-b1fcf8a40d94-webhook-cert\") pod \"cinder-operator-controller-manager-64c8b49677-kb8ql\" (UID: \"13c3e212-4606-4375-96dd-b1fcf8a40d94\") " pod="openstack-operators/cinder-operator-controller-manager-64c8b49677-kb8ql" Jan 30 22:01:10 crc kubenswrapper[4869]: I0130 22:01:10.115396 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/13c3e212-4606-4375-96dd-b1fcf8a40d94-apiservice-cert\") pod \"cinder-operator-controller-manager-64c8b49677-kb8ql\" (UID: \"13c3e212-4606-4375-96dd-b1fcf8a40d94\") " pod="openstack-operators/cinder-operator-controller-manager-64c8b49677-kb8ql" Jan 30 22:01:10 crc kubenswrapper[4869]: I0130 22:01:10.115425 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zprrd\" (UniqueName: \"kubernetes.io/projected/13c3e212-4606-4375-96dd-b1fcf8a40d94-kube-api-access-zprrd\") pod \"cinder-operator-controller-manager-64c8b49677-kb8ql\" (UID: \"13c3e212-4606-4375-96dd-b1fcf8a40d94\") " pod="openstack-operators/cinder-operator-controller-manager-64c8b49677-kb8ql" Jan 30 22:01:10 crc kubenswrapper[4869]: I0130 22:01:10.121911 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/13c3e212-4606-4375-96dd-b1fcf8a40d94-apiservice-cert\") pod \"cinder-operator-controller-manager-64c8b49677-kb8ql\" (UID: \"13c3e212-4606-4375-96dd-b1fcf8a40d94\") " pod="openstack-operators/cinder-operator-controller-manager-64c8b49677-kb8ql" Jan 30 22:01:10 crc kubenswrapper[4869]: I0130 22:01:10.126445 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/13c3e212-4606-4375-96dd-b1fcf8a40d94-webhook-cert\") pod \"cinder-operator-controller-manager-64c8b49677-kb8ql\" (UID: \"13c3e212-4606-4375-96dd-b1fcf8a40d94\") " pod="openstack-operators/cinder-operator-controller-manager-64c8b49677-kb8ql" Jan 30 22:01:10 crc kubenswrapper[4869]: I0130 22:01:10.130186 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zprrd\" (UniqueName: \"kubernetes.io/projected/13c3e212-4606-4375-96dd-b1fcf8a40d94-kube-api-access-zprrd\") pod \"cinder-operator-controller-manager-64c8b49677-kb8ql\" (UID: \"13c3e212-4606-4375-96dd-b1fcf8a40d94\") " pod="openstack-operators/cinder-operator-controller-manager-64c8b49677-kb8ql" Jan 30 22:01:10 crc kubenswrapper[4869]: I0130 22:01:10.153098 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-64c8b49677-kb8ql" Jan 30 22:01:10 crc kubenswrapper[4869]: I0130 22:01:10.583277 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-64c8b49677-kb8ql"] Jan 30 22:01:10 crc kubenswrapper[4869]: I0130 22:01:10.592104 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 22:01:10 crc kubenswrapper[4869]: I0130 22:01:10.933849 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-64c8b49677-kb8ql" event={"ID":"13c3e212-4606-4375-96dd-b1fcf8a40d94","Type":"ContainerStarted","Data":"16d78f623439d3d4f1e89a8dd6ee42c057b28b7af78097295ba0a93dbef92b0f"} Jan 30 22:01:12 crc kubenswrapper[4869]: I0130 22:01:12.960911 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-64c8b49677-kb8ql" event={"ID":"13c3e212-4606-4375-96dd-b1fcf8a40d94","Type":"ContainerStarted","Data":"ede01769a5c417d7e27ca40d07173200b5707dbe5b97a96a586631e76bc31222"} Jan 30 22:01:12 crc kubenswrapper[4869]: I0130 22:01:12.961467 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-64c8b49677-kb8ql" Jan 30 22:01:12 crc kubenswrapper[4869]: I0130 22:01:12.983535 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-64c8b49677-kb8ql" podStartSLOduration=2.322274168 podStartE2EDuration="3.98351821s" podCreationTimestamp="2026-01-30 22:01:09 +0000 UTC" firstStartedPulling="2026-01-30 22:01:10.591787173 +0000 UTC m=+1071.477545198" lastFinishedPulling="2026-01-30 22:01:12.253031215 +0000 UTC m=+1073.138789240" observedRunningTime="2026-01-30 22:01:12.978686017 +0000 UTC m=+1073.864444032" watchObservedRunningTime="2026-01-30 22:01:12.98351821 +0000 UTC m=+1073.869276235" Jan 30 22:01:20 crc kubenswrapper[4869]: I0130 22:01:20.165458 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/cinder-operator-controller-manager-64c8b49677-kb8ql" Jan 30 22:01:25 crc kubenswrapper[4869]: I0130 22:01:25.020943 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-db-create-5zk6x"] Jan 30 22:01:25 crc kubenswrapper[4869]: I0130 22:01:25.022255 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-db-create-5zk6x" Jan 30 22:01:25 crc kubenswrapper[4869]: I0130 22:01:25.026701 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-3e67-account-create-update-zcf8m"] Jan 30 22:01:25 crc kubenswrapper[4869]: I0130 22:01:25.027602 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-3e67-account-create-update-zcf8m" Jan 30 22:01:25 crc kubenswrapper[4869]: I0130 22:01:25.036427 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-db-create-5zk6x"] Jan 30 22:01:25 crc kubenswrapper[4869]: I0130 22:01:25.061741 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-db-secret" Jan 30 22:01:25 crc kubenswrapper[4869]: I0130 22:01:25.087326 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-3e67-account-create-update-zcf8m"] Jan 30 22:01:25 crc kubenswrapper[4869]: I0130 22:01:25.116587 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1d0d236-2728-44cf-b723-8de09f743481-operator-scripts\") pod \"cinder-db-create-5zk6x\" (UID: \"c1d0d236-2728-44cf-b723-8de09f743481\") " pod="cinder-kuttl-tests/cinder-db-create-5zk6x" Jan 30 22:01:25 crc kubenswrapper[4869]: I0130 22:01:25.116666 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9hbf\" (UniqueName: \"kubernetes.io/projected/e5266478-3a00-466d-8e24-6a9e0d3edb29-kube-api-access-z9hbf\") pod \"cinder-3e67-account-create-update-zcf8m\" (UID: \"e5266478-3a00-466d-8e24-6a9e0d3edb29\") " pod="cinder-kuttl-tests/cinder-3e67-account-create-update-zcf8m" Jan 30 22:01:25 crc kubenswrapper[4869]: I0130 22:01:25.116731 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5266478-3a00-466d-8e24-6a9e0d3edb29-operator-scripts\") pod \"cinder-3e67-account-create-update-zcf8m\" (UID: \"e5266478-3a00-466d-8e24-6a9e0d3edb29\") " pod="cinder-kuttl-tests/cinder-3e67-account-create-update-zcf8m" Jan 30 22:01:25 crc kubenswrapper[4869]: I0130 22:01:25.116773 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvdrq\" (UniqueName: \"kubernetes.io/projected/c1d0d236-2728-44cf-b723-8de09f743481-kube-api-access-zvdrq\") pod \"cinder-db-create-5zk6x\" (UID: \"c1d0d236-2728-44cf-b723-8de09f743481\") " pod="cinder-kuttl-tests/cinder-db-create-5zk6x" Jan 30 22:01:25 crc kubenswrapper[4869]: I0130 22:01:25.219320 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1d0d236-2728-44cf-b723-8de09f743481-operator-scripts\") pod \"cinder-db-create-5zk6x\" (UID: \"c1d0d236-2728-44cf-b723-8de09f743481\") " pod="cinder-kuttl-tests/cinder-db-create-5zk6x" Jan 30 22:01:25 crc kubenswrapper[4869]: I0130 22:01:25.219764 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-z9hbf\" (UniqueName: \"kubernetes.io/projected/e5266478-3a00-466d-8e24-6a9e0d3edb29-kube-api-access-z9hbf\") pod \"cinder-3e67-account-create-update-zcf8m\" (UID: \"e5266478-3a00-466d-8e24-6a9e0d3edb29\") " pod="cinder-kuttl-tests/cinder-3e67-account-create-update-zcf8m" Jan 30 22:01:25 crc kubenswrapper[4869]: I0130 22:01:25.219966 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5266478-3a00-466d-8e24-6a9e0d3edb29-operator-scripts\") pod \"cinder-3e67-account-create-update-zcf8m\" (UID: \"e5266478-3a00-466d-8e24-6a9e0d3edb29\") " pod="cinder-kuttl-tests/cinder-3e67-account-create-update-zcf8m" Jan 30 22:01:25 crc kubenswrapper[4869]: I0130 22:01:25.220128 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvdrq\" (UniqueName: \"kubernetes.io/projected/c1d0d236-2728-44cf-b723-8de09f743481-kube-api-access-zvdrq\") pod \"cinder-db-create-5zk6x\" (UID: \"c1d0d236-2728-44cf-b723-8de09f743481\") " pod="cinder-kuttl-tests/cinder-db-create-5zk6x" Jan 30 22:01:25 crc kubenswrapper[4869]: I0130 22:01:25.220287 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1d0d236-2728-44cf-b723-8de09f743481-operator-scripts\") pod \"cinder-db-create-5zk6x\" (UID: \"c1d0d236-2728-44cf-b723-8de09f743481\") " pod="cinder-kuttl-tests/cinder-db-create-5zk6x" Jan 30 22:01:25 crc kubenswrapper[4869]: I0130 22:01:25.220758 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5266478-3a00-466d-8e24-6a9e0d3edb29-operator-scripts\") pod \"cinder-3e67-account-create-update-zcf8m\" (UID: \"e5266478-3a00-466d-8e24-6a9e0d3edb29\") " pod="cinder-kuttl-tests/cinder-3e67-account-create-update-zcf8m" Jan 30 22:01:25 crc kubenswrapper[4869]: I0130 22:01:25.241662 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9hbf\" (UniqueName: \"kubernetes.io/projected/e5266478-3a00-466d-8e24-6a9e0d3edb29-kube-api-access-z9hbf\") pod \"cinder-3e67-account-create-update-zcf8m\" (UID: \"e5266478-3a00-466d-8e24-6a9e0d3edb29\") " pod="cinder-kuttl-tests/cinder-3e67-account-create-update-zcf8m" Jan 30 22:01:25 crc kubenswrapper[4869]: I0130 22:01:25.242066 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvdrq\" (UniqueName: \"kubernetes.io/projected/c1d0d236-2728-44cf-b723-8de09f743481-kube-api-access-zvdrq\") pod \"cinder-db-create-5zk6x\" (UID: \"c1d0d236-2728-44cf-b723-8de09f743481\") " pod="cinder-kuttl-tests/cinder-db-create-5zk6x" Jan 30 22:01:25 crc kubenswrapper[4869]: I0130 22:01:25.375919 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-db-create-5zk6x" Jan 30 22:01:25 crc kubenswrapper[4869]: I0130 22:01:25.395878 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-3e67-account-create-update-zcf8m" Jan 30 22:01:25 crc kubenswrapper[4869]: I0130 22:01:25.838299 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-db-create-5zk6x"] Jan 30 22:01:25 crc kubenswrapper[4869]: W0130 22:01:25.847847 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1d0d236_2728_44cf_b723_8de09f743481.slice/crio-9488b00f1abc05af60baaaa6d825ab2d6fc1107d4fa46391d5f0f544916185c6 WatchSource:0}: Error finding container 9488b00f1abc05af60baaaa6d825ab2d6fc1107d4fa46391d5f0f544916185c6: Status 404 returned error can't find the container with id 9488b00f1abc05af60baaaa6d825ab2d6fc1107d4fa46391d5f0f544916185c6 Jan 30 22:01:25 crc kubenswrapper[4869]: I0130 22:01:25.913625 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-3e67-account-create-update-zcf8m"] Jan 30 22:01:25 crc kubenswrapper[4869]: W0130 22:01:25.923742 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5266478_3a00_466d_8e24_6a9e0d3edb29.slice/crio-d114eb2f92988326c796741c7e0b9d04b1f3a6cdc70299b23af0772f61e2eeac WatchSource:0}: Error finding container d114eb2f92988326c796741c7e0b9d04b1f3a6cdc70299b23af0772f61e2eeac: Status 404 returned error can't find the container with id d114eb2f92988326c796741c7e0b9d04b1f3a6cdc70299b23af0772f61e2eeac Jan 30 22:01:26 crc kubenswrapper[4869]: I0130 22:01:26.051167 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-3e67-account-create-update-zcf8m" event={"ID":"e5266478-3a00-466d-8e24-6a9e0d3edb29","Type":"ContainerStarted","Data":"d114eb2f92988326c796741c7e0b9d04b1f3a6cdc70299b23af0772f61e2eeac"} Jan 30 22:01:26 crc kubenswrapper[4869]: I0130 22:01:26.056442 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-create-5zk6x" event={"ID":"c1d0d236-2728-44cf-b723-8de09f743481","Type":"ContainerStarted","Data":"9488b00f1abc05af60baaaa6d825ab2d6fc1107d4fa46391d5f0f544916185c6"} Jan 30 22:01:27 crc kubenswrapper[4869]: I0130 22:01:27.064040 4869 generic.go:334] "Generic (PLEG): container finished" podID="e5266478-3a00-466d-8e24-6a9e0d3edb29" containerID="acbb5e1c647f39750781d15018efb6f5b4a76bca075543b686f384d0c985155e" exitCode=0 Jan 30 22:01:27 crc kubenswrapper[4869]: I0130 22:01:27.064172 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-3e67-account-create-update-zcf8m" event={"ID":"e5266478-3a00-466d-8e24-6a9e0d3edb29","Type":"ContainerDied","Data":"acbb5e1c647f39750781d15018efb6f5b4a76bca075543b686f384d0c985155e"} Jan 30 22:01:27 crc kubenswrapper[4869]: I0130 22:01:27.066299 4869 generic.go:334] "Generic (PLEG): container finished" podID="c1d0d236-2728-44cf-b723-8de09f743481" containerID="ed844f52c3dec4da536f571975e88d08b9d65c667fbc17cff09dfa92f9de7a5a" exitCode=0 Jan 30 22:01:27 crc kubenswrapper[4869]: I0130 22:01:27.066373 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-create-5zk6x" event={"ID":"c1d0d236-2728-44cf-b723-8de09f743481","Type":"ContainerDied","Data":"ed844f52c3dec4da536f571975e88d08b9d65c667fbc17cff09dfa92f9de7a5a"} Jan 30 22:01:28 crc kubenswrapper[4869]: I0130 22:01:28.409030 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-3e67-account-create-update-zcf8m" Jan 30 22:01:28 crc kubenswrapper[4869]: I0130 22:01:28.413866 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-db-create-5zk6x" Jan 30 22:01:28 crc kubenswrapper[4869]: I0130 22:01:28.485326 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9hbf\" (UniqueName: \"kubernetes.io/projected/e5266478-3a00-466d-8e24-6a9e0d3edb29-kube-api-access-z9hbf\") pod \"e5266478-3a00-466d-8e24-6a9e0d3edb29\" (UID: \"e5266478-3a00-466d-8e24-6a9e0d3edb29\") " Jan 30 22:01:28 crc kubenswrapper[4869]: I0130 22:01:28.485449 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvdrq\" (UniqueName: \"kubernetes.io/projected/c1d0d236-2728-44cf-b723-8de09f743481-kube-api-access-zvdrq\") pod \"c1d0d236-2728-44cf-b723-8de09f743481\" (UID: \"c1d0d236-2728-44cf-b723-8de09f743481\") " Jan 30 22:01:28 crc kubenswrapper[4869]: I0130 22:01:28.485540 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1d0d236-2728-44cf-b723-8de09f743481-operator-scripts\") pod \"c1d0d236-2728-44cf-b723-8de09f743481\" (UID: \"c1d0d236-2728-44cf-b723-8de09f743481\") " Jan 30 22:01:28 crc kubenswrapper[4869]: I0130 22:01:28.485564 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5266478-3a00-466d-8e24-6a9e0d3edb29-operator-scripts\") pod \"e5266478-3a00-466d-8e24-6a9e0d3edb29\" (UID: \"e5266478-3a00-466d-8e24-6a9e0d3edb29\") " Jan 30 22:01:28 crc kubenswrapper[4869]: I0130 22:01:28.486170 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5266478-3a00-466d-8e24-6a9e0d3edb29-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e5266478-3a00-466d-8e24-6a9e0d3edb29" (UID: "e5266478-3a00-466d-8e24-6a9e0d3edb29"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:28 crc kubenswrapper[4869]: I0130 22:01:28.486404 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1d0d236-2728-44cf-b723-8de09f743481-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c1d0d236-2728-44cf-b723-8de09f743481" (UID: "c1d0d236-2728-44cf-b723-8de09f743481"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:28 crc kubenswrapper[4869]: I0130 22:01:28.492954 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1d0d236-2728-44cf-b723-8de09f743481-kube-api-access-zvdrq" (OuterVolumeSpecName: "kube-api-access-zvdrq") pod "c1d0d236-2728-44cf-b723-8de09f743481" (UID: "c1d0d236-2728-44cf-b723-8de09f743481"). InnerVolumeSpecName "kube-api-access-zvdrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:01:28 crc kubenswrapper[4869]: I0130 22:01:28.493196 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5266478-3a00-466d-8e24-6a9e0d3edb29-kube-api-access-z9hbf" (OuterVolumeSpecName: "kube-api-access-z9hbf") pod "e5266478-3a00-466d-8e24-6a9e0d3edb29" (UID: "e5266478-3a00-466d-8e24-6a9e0d3edb29"). InnerVolumeSpecName "kube-api-access-z9hbf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:01:28 crc kubenswrapper[4869]: I0130 22:01:28.587661 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1d0d236-2728-44cf-b723-8de09f743481-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:28 crc kubenswrapper[4869]: I0130 22:01:28.587715 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5266478-3a00-466d-8e24-6a9e0d3edb29-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:28 crc kubenswrapper[4869]: I0130 22:01:28.587729 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9hbf\" (UniqueName: \"kubernetes.io/projected/e5266478-3a00-466d-8e24-6a9e0d3edb29-kube-api-access-z9hbf\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:28 crc kubenswrapper[4869]: I0130 22:01:28.587751 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvdrq\" (UniqueName: \"kubernetes.io/projected/c1d0d236-2728-44cf-b723-8de09f743481-kube-api-access-zvdrq\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:29 crc kubenswrapper[4869]: I0130 22:01:29.082567 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-create-5zk6x" event={"ID":"c1d0d236-2728-44cf-b723-8de09f743481","Type":"ContainerDied","Data":"9488b00f1abc05af60baaaa6d825ab2d6fc1107d4fa46391d5f0f544916185c6"} Jan 30 22:01:29 crc kubenswrapper[4869]: I0130 22:01:29.082612 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-db-create-5zk6x" Jan 30 22:01:29 crc kubenswrapper[4869]: I0130 22:01:29.082646 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9488b00f1abc05af60baaaa6d825ab2d6fc1107d4fa46391d5f0f544916185c6" Jan 30 22:01:29 crc kubenswrapper[4869]: I0130 22:01:29.084819 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-3e67-account-create-update-zcf8m" event={"ID":"e5266478-3a00-466d-8e24-6a9e0d3edb29","Type":"ContainerDied","Data":"d114eb2f92988326c796741c7e0b9d04b1f3a6cdc70299b23af0772f61e2eeac"} Jan 30 22:01:29 crc kubenswrapper[4869]: I0130 22:01:29.084865 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d114eb2f92988326c796741c7e0b9d04b1f3a6cdc70299b23af0772f61e2eeac" Jan 30 22:01:29 crc kubenswrapper[4869]: I0130 22:01:29.084939 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-3e67-account-create-update-zcf8m" Jan 30 22:01:30 crc kubenswrapper[4869]: I0130 22:01:30.269542 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-db-sync-xpr5d"] Jan 30 22:01:30 crc kubenswrapper[4869]: E0130 22:01:30.270074 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5266478-3a00-466d-8e24-6a9e0d3edb29" containerName="mariadb-account-create-update" Jan 30 22:01:30 crc kubenswrapper[4869]: I0130 22:01:30.270087 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5266478-3a00-466d-8e24-6a9e0d3edb29" containerName="mariadb-account-create-update" Jan 30 22:01:30 crc kubenswrapper[4869]: E0130 22:01:30.270112 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1d0d236-2728-44cf-b723-8de09f743481" containerName="mariadb-database-create" Jan 30 22:01:30 crc kubenswrapper[4869]: I0130 22:01:30.270118 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1d0d236-2728-44cf-b723-8de09f743481" containerName="mariadb-database-create" Jan 30 22:01:30 crc kubenswrapper[4869]: I0130 22:01:30.270245 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5266478-3a00-466d-8e24-6a9e0d3edb29" containerName="mariadb-account-create-update" Jan 30 22:01:30 crc kubenswrapper[4869]: I0130 22:01:30.270259 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1d0d236-2728-44cf-b723-8de09f743481" containerName="mariadb-database-create" Jan 30 22:01:30 crc kubenswrapper[4869]: I0130 22:01:30.270702 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-db-sync-xpr5d" Jan 30 22:01:30 crc kubenswrapper[4869]: I0130 22:01:30.273033 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-config-data" Jan 30 22:01:30 crc kubenswrapper[4869]: I0130 22:01:30.273069 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-cinder-dockercfg-6wxf9" Jan 30 22:01:30 crc kubenswrapper[4869]: I0130 22:01:30.273666 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-scripts" Jan 30 22:01:30 crc kubenswrapper[4869]: I0130 22:01:30.286843 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-db-sync-xpr5d"] Jan 30 22:01:30 crc kubenswrapper[4869]: I0130 22:01:30.413768 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73d220bd-14d6-4146-a2a4-bad4060711bb-config-data\") pod \"cinder-db-sync-xpr5d\" (UID: \"73d220bd-14d6-4146-a2a4-bad4060711bb\") " pod="cinder-kuttl-tests/cinder-db-sync-xpr5d" Jan 30 22:01:30 crc kubenswrapper[4869]: I0130 22:01:30.413941 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/73d220bd-14d6-4146-a2a4-bad4060711bb-db-sync-config-data\") pod \"cinder-db-sync-xpr5d\" (UID: \"73d220bd-14d6-4146-a2a4-bad4060711bb\") " pod="cinder-kuttl-tests/cinder-db-sync-xpr5d" Jan 30 22:01:30 crc kubenswrapper[4869]: I0130 22:01:30.414037 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73d220bd-14d6-4146-a2a4-bad4060711bb-scripts\") pod \"cinder-db-sync-xpr5d\" (UID: \"73d220bd-14d6-4146-a2a4-bad4060711bb\") " 
pod="cinder-kuttl-tests/cinder-db-sync-xpr5d" Jan 30 22:01:30 crc kubenswrapper[4869]: I0130 22:01:30.414139 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/73d220bd-14d6-4146-a2a4-bad4060711bb-etc-machine-id\") pod \"cinder-db-sync-xpr5d\" (UID: \"73d220bd-14d6-4146-a2a4-bad4060711bb\") " pod="cinder-kuttl-tests/cinder-db-sync-xpr5d" Jan 30 22:01:30 crc kubenswrapper[4869]: I0130 22:01:30.414242 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd8fg\" (UniqueName: \"kubernetes.io/projected/73d220bd-14d6-4146-a2a4-bad4060711bb-kube-api-access-bd8fg\") pod \"cinder-db-sync-xpr5d\" (UID: \"73d220bd-14d6-4146-a2a4-bad4060711bb\") " pod="cinder-kuttl-tests/cinder-db-sync-xpr5d" Jan 30 22:01:30 crc kubenswrapper[4869]: I0130 22:01:30.515655 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bd8fg\" (UniqueName: \"kubernetes.io/projected/73d220bd-14d6-4146-a2a4-bad4060711bb-kube-api-access-bd8fg\") pod \"cinder-db-sync-xpr5d\" (UID: \"73d220bd-14d6-4146-a2a4-bad4060711bb\") " pod="cinder-kuttl-tests/cinder-db-sync-xpr5d" Jan 30 22:01:30 crc kubenswrapper[4869]: I0130 22:01:30.515743 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73d220bd-14d6-4146-a2a4-bad4060711bb-config-data\") pod \"cinder-db-sync-xpr5d\" (UID: \"73d220bd-14d6-4146-a2a4-bad4060711bb\") " pod="cinder-kuttl-tests/cinder-db-sync-xpr5d" Jan 30 22:01:30 crc kubenswrapper[4869]: I0130 22:01:30.515789 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/73d220bd-14d6-4146-a2a4-bad4060711bb-db-sync-config-data\") pod \"cinder-db-sync-xpr5d\" (UID: \"73d220bd-14d6-4146-a2a4-bad4060711bb\") " pod="cinder-kuttl-tests/cinder-db-sync-xpr5d" Jan 30 22:01:30 crc kubenswrapper[4869]: I0130 22:01:30.515824 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73d220bd-14d6-4146-a2a4-bad4060711bb-scripts\") pod \"cinder-db-sync-xpr5d\" (UID: \"73d220bd-14d6-4146-a2a4-bad4060711bb\") " pod="cinder-kuttl-tests/cinder-db-sync-xpr5d" Jan 30 22:01:30 crc kubenswrapper[4869]: I0130 22:01:30.515872 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/73d220bd-14d6-4146-a2a4-bad4060711bb-etc-machine-id\") pod \"cinder-db-sync-xpr5d\" (UID: \"73d220bd-14d6-4146-a2a4-bad4060711bb\") " pod="cinder-kuttl-tests/cinder-db-sync-xpr5d" Jan 30 22:01:30 crc kubenswrapper[4869]: I0130 22:01:30.516006 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/73d220bd-14d6-4146-a2a4-bad4060711bb-etc-machine-id\") pod \"cinder-db-sync-xpr5d\" (UID: \"73d220bd-14d6-4146-a2a4-bad4060711bb\") " pod="cinder-kuttl-tests/cinder-db-sync-xpr5d" Jan 30 22:01:30 crc kubenswrapper[4869]: I0130 22:01:30.519771 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73d220bd-14d6-4146-a2a4-bad4060711bb-scripts\") pod \"cinder-db-sync-xpr5d\" (UID: \"73d220bd-14d6-4146-a2a4-bad4060711bb\") " pod="cinder-kuttl-tests/cinder-db-sync-xpr5d" Jan 30 22:01:30 crc kubenswrapper[4869]: I0130 
22:01:30.520077 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73d220bd-14d6-4146-a2a4-bad4060711bb-config-data\") pod \"cinder-db-sync-xpr5d\" (UID: \"73d220bd-14d6-4146-a2a4-bad4060711bb\") " pod="cinder-kuttl-tests/cinder-db-sync-xpr5d" Jan 30 22:01:30 crc kubenswrapper[4869]: I0130 22:01:30.521021 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/73d220bd-14d6-4146-a2a4-bad4060711bb-db-sync-config-data\") pod \"cinder-db-sync-xpr5d\" (UID: \"73d220bd-14d6-4146-a2a4-bad4060711bb\") " pod="cinder-kuttl-tests/cinder-db-sync-xpr5d" Jan 30 22:01:30 crc kubenswrapper[4869]: I0130 22:01:30.532166 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bd8fg\" (UniqueName: \"kubernetes.io/projected/73d220bd-14d6-4146-a2a4-bad4060711bb-kube-api-access-bd8fg\") pod \"cinder-db-sync-xpr5d\" (UID: \"73d220bd-14d6-4146-a2a4-bad4060711bb\") " pod="cinder-kuttl-tests/cinder-db-sync-xpr5d" Jan 30 22:01:30 crc kubenswrapper[4869]: I0130 22:01:30.611756 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-db-sync-xpr5d" Jan 30 22:01:31 crc kubenswrapper[4869]: I0130 22:01:31.015192 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-db-sync-xpr5d"] Jan 30 22:01:31 crc kubenswrapper[4869]: W0130 22:01:31.024054 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73d220bd_14d6_4146_a2a4_bad4060711bb.slice/crio-1a4a66855b346927e205c4203c029f597c3a959cc3037d837f0489d7181f1876 WatchSource:0}: Error finding container 1a4a66855b346927e205c4203c029f597c3a959cc3037d837f0489d7181f1876: Status 404 returned error can't find the container with id 1a4a66855b346927e205c4203c029f597c3a959cc3037d837f0489d7181f1876 Jan 30 22:01:31 crc kubenswrapper[4869]: I0130 22:01:31.098593 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-sync-xpr5d" event={"ID":"73d220bd-14d6-4146-a2a4-bad4060711bb","Type":"ContainerStarted","Data":"1a4a66855b346927e205c4203c029f597c3a959cc3037d837f0489d7181f1876"} Jan 30 22:01:37 crc kubenswrapper[4869]: I0130 22:01:37.417088 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cinder-kuttl-tests/keystone-5b988b97cc-bzpmz" Jan 30 22:01:48 crc kubenswrapper[4869]: E0130 22:01:48.216556 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 30 22:01:48 crc kubenswrapper[4869]: E0130 22:01:48.217254 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bd8fg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-xpr5d_cinder-kuttl-tests(73d220bd-14d6-4146-a2a4-bad4060711bb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 22:01:48 crc kubenswrapper[4869]: E0130 22:01:48.218469 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="cinder-kuttl-tests/cinder-db-sync-xpr5d" podUID="73d220bd-14d6-4146-a2a4-bad4060711bb" Jan 30 22:01:48 crc kubenswrapper[4869]: E0130 22:01:48.250458 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="cinder-kuttl-tests/cinder-db-sync-xpr5d" podUID="73d220bd-14d6-4146-a2a4-bad4060711bb" Jan 30 22:02:04 crc kubenswrapper[4869]: I0130 22:02:04.350302 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-sync-xpr5d" event={"ID":"73d220bd-14d6-4146-a2a4-bad4060711bb","Type":"ContainerStarted","Data":"c757126fd6532f4cf5c4bb163ce8f6996b20ad2869c1493e2a0c112450bb386f"} Jan 30 22:02:04 crc kubenswrapper[4869]: I0130 22:02:04.374381 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-db-sync-xpr5d" podStartSLOduration=2.086041443 podStartE2EDuration="34.374359533s" 
podCreationTimestamp="2026-01-30 22:01:30 +0000 UTC" firstStartedPulling="2026-01-30 22:01:31.02598666 +0000 UTC m=+1091.911744685" lastFinishedPulling="2026-01-30 22:02:03.31430475 +0000 UTC m=+1124.200062775" observedRunningTime="2026-01-30 22:02:04.368110506 +0000 UTC m=+1125.253868531" watchObservedRunningTime="2026-01-30 22:02:04.374359533 +0000 UTC m=+1125.260117568" Jan 30 22:02:09 crc kubenswrapper[4869]: I0130 22:02:09.390163 4869 generic.go:334] "Generic (PLEG): container finished" podID="73d220bd-14d6-4146-a2a4-bad4060711bb" containerID="c757126fd6532f4cf5c4bb163ce8f6996b20ad2869c1493e2a0c112450bb386f" exitCode=0 Jan 30 22:02:09 crc kubenswrapper[4869]: I0130 22:02:09.390262 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-sync-xpr5d" event={"ID":"73d220bd-14d6-4146-a2a4-bad4060711bb","Type":"ContainerDied","Data":"c757126fd6532f4cf5c4bb163ce8f6996b20ad2869c1493e2a0c112450bb386f"} Jan 30 22:02:10 crc kubenswrapper[4869]: I0130 22:02:10.677622 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-db-sync-xpr5d" Jan 30 22:02:10 crc kubenswrapper[4869]: I0130 22:02:10.845004 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/73d220bd-14d6-4146-a2a4-bad4060711bb-etc-machine-id\") pod \"73d220bd-14d6-4146-a2a4-bad4060711bb\" (UID: \"73d220bd-14d6-4146-a2a4-bad4060711bb\") " Jan 30 22:02:10 crc kubenswrapper[4869]: I0130 22:02:10.845075 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bd8fg\" (UniqueName: \"kubernetes.io/projected/73d220bd-14d6-4146-a2a4-bad4060711bb-kube-api-access-bd8fg\") pod \"73d220bd-14d6-4146-a2a4-bad4060711bb\" (UID: \"73d220bd-14d6-4146-a2a4-bad4060711bb\") " Jan 30 22:02:10 crc kubenswrapper[4869]: I0130 22:02:10.845112 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/73d220bd-14d6-4146-a2a4-bad4060711bb-db-sync-config-data\") pod \"73d220bd-14d6-4146-a2a4-bad4060711bb\" (UID: \"73d220bd-14d6-4146-a2a4-bad4060711bb\") " Jan 30 22:02:10 crc kubenswrapper[4869]: I0130 22:02:10.845220 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73d220bd-14d6-4146-a2a4-bad4060711bb-config-data\") pod \"73d220bd-14d6-4146-a2a4-bad4060711bb\" (UID: \"73d220bd-14d6-4146-a2a4-bad4060711bb\") " Jan 30 22:02:10 crc kubenswrapper[4869]: I0130 22:02:10.845258 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73d220bd-14d6-4146-a2a4-bad4060711bb-scripts\") pod \"73d220bd-14d6-4146-a2a4-bad4060711bb\" (UID: \"73d220bd-14d6-4146-a2a4-bad4060711bb\") " Jan 30 22:02:10 crc kubenswrapper[4869]: I0130 22:02:10.845264 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73d220bd-14d6-4146-a2a4-bad4060711bb-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "73d220bd-14d6-4146-a2a4-bad4060711bb" (UID: "73d220bd-14d6-4146-a2a4-bad4060711bb"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:02:10 crc kubenswrapper[4869]: I0130 22:02:10.846181 4869 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/73d220bd-14d6-4146-a2a4-bad4060711bb-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:10 crc kubenswrapper[4869]: I0130 22:02:10.851573 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73d220bd-14d6-4146-a2a4-bad4060711bb-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "73d220bd-14d6-4146-a2a4-bad4060711bb" (UID: "73d220bd-14d6-4146-a2a4-bad4060711bb"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:10 crc kubenswrapper[4869]: I0130 22:02:10.851962 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73d220bd-14d6-4146-a2a4-bad4060711bb-kube-api-access-bd8fg" (OuterVolumeSpecName: "kube-api-access-bd8fg") pod "73d220bd-14d6-4146-a2a4-bad4060711bb" (UID: "73d220bd-14d6-4146-a2a4-bad4060711bb"). InnerVolumeSpecName "kube-api-access-bd8fg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:02:10 crc kubenswrapper[4869]: I0130 22:02:10.852312 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73d220bd-14d6-4146-a2a4-bad4060711bb-scripts" (OuterVolumeSpecName: "scripts") pod "73d220bd-14d6-4146-a2a4-bad4060711bb" (UID: "73d220bd-14d6-4146-a2a4-bad4060711bb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:10 crc kubenswrapper[4869]: I0130 22:02:10.916108 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73d220bd-14d6-4146-a2a4-bad4060711bb-config-data" (OuterVolumeSpecName: "config-data") pod "73d220bd-14d6-4146-a2a4-bad4060711bb" (UID: "73d220bd-14d6-4146-a2a4-bad4060711bb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:10 crc kubenswrapper[4869]: I0130 22:02:10.949449 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73d220bd-14d6-4146-a2a4-bad4060711bb-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:10 crc kubenswrapper[4869]: I0130 22:02:10.949498 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73d220bd-14d6-4146-a2a4-bad4060711bb-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:10 crc kubenswrapper[4869]: I0130 22:02:10.949513 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bd8fg\" (UniqueName: \"kubernetes.io/projected/73d220bd-14d6-4146-a2a4-bad4060711bb-kube-api-access-bd8fg\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:10 crc kubenswrapper[4869]: I0130 22:02:10.949527 4869 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/73d220bd-14d6-4146-a2a4-bad4060711bb-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.406959 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-sync-xpr5d" event={"ID":"73d220bd-14d6-4146-a2a4-bad4060711bb","Type":"ContainerDied","Data":"1a4a66855b346927e205c4203c029f597c3a959cc3037d837f0489d7181f1876"} Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.407012 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a4a66855b346927e205c4203c029f597c3a959cc3037d837f0489d7181f1876" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.407046 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-db-sync-xpr5d" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.719327 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-0"] Jan 30 22:02:11 crc kubenswrapper[4869]: E0130 22:02:11.719690 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73d220bd-14d6-4146-a2a4-bad4060711bb" containerName="cinder-db-sync" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.719708 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="73d220bd-14d6-4146-a2a4-bad4060711bb" containerName="cinder-db-sync" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.719883 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="73d220bd-14d6-4146-a2a4-bad4060711bb" containerName="cinder-db-sync" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.721206 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.722866 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-scheduler-config-data" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.724170 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-scripts" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.724601 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-cinder-dockercfg-6wxf9" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.730504 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-0"] Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.753172 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-config-data" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.835489 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-backup-0"] Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.836843 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.843133 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-backup-config-data" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.848800 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-volume-volume1-0"] Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.854174 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.857286 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-volume-volume1-config-data" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.861567 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e1b895b0-ebcd-4f90-ae7b-633961d007a4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e1b895b0-ebcd-4f90-ae7b-633961d007a4\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.861616 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1b895b0-ebcd-4f90-ae7b-633961d007a4-config-data\") pod \"cinder-scheduler-0\" (UID: \"e1b895b0-ebcd-4f90-ae7b-633961d007a4\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.861679 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1b895b0-ebcd-4f90-ae7b-633961d007a4-scripts\") pod \"cinder-scheduler-0\" (UID: \"e1b895b0-ebcd-4f90-ae7b-633961d007a4\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.861718 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e1b895b0-ebcd-4f90-ae7b-633961d007a4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e1b895b0-ebcd-4f90-ae7b-633961d007a4\") " 
pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.861738 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzcv4\" (UniqueName: \"kubernetes.io/projected/e1b895b0-ebcd-4f90-ae7b-633961d007a4-kube-api-access-xzcv4\") pod \"cinder-scheduler-0\" (UID: \"e1b895b0-ebcd-4f90-ae7b-633961d007a4\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.867275 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-backup-0"] Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.907315 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-volume-volume1-0"] Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.962641 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-476n9\" (UniqueName: \"kubernetes.io/projected/a748ea99-7369-48cb-8983-9f41ff077f82-kube-api-access-476n9\") pod \"cinder-volume-volume1-0\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.962705 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-lib-modules\") pod \"cinder-backup-0\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.962721 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-dev\") pod \"cinder-volume-volume1-0\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.962741 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43e985d3-f817-41ac-919d-9625124f4fcd-config-data\") pod \"cinder-backup-0\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.962756 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsrgv\" (UniqueName: \"kubernetes.io/projected/43e985d3-f817-41ac-919d-9625124f4fcd-kube-api-access-dsrgv\") pod \"cinder-backup-0\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.962778 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1b895b0-ebcd-4f90-ae7b-633961d007a4-config-data\") pod \"cinder-scheduler-0\" (UID: \"e1b895b0-ebcd-4f90-ae7b-633961d007a4\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.962794 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-sys\") pod \"cinder-volume-volume1-0\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:11 crc 
kubenswrapper[4869]: I0130 22:02:11.962813 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a748ea99-7369-48cb-8983-9f41ff077f82-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.962839 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a748ea99-7369-48cb-8983-9f41ff077f82-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.962857 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-run\") pod \"cinder-backup-0\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.962877 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.962891 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.962935 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.962952 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1b895b0-ebcd-4f90-ae7b-633961d007a4-scripts\") pod \"cinder-scheduler-0\" (UID: \"e1b895b0-ebcd-4f90-ae7b-633961d007a4\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.962967 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-sys\") pod \"cinder-backup-0\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.962984 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.962998 4869 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.963015 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.963032 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.963046 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-run\") pod \"cinder-volume-volume1-0\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.963065 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-etc-nvme\") pod \"cinder-backup-0\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.963078 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.963120 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a748ea99-7369-48cb-8983-9f41ff077f82-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.963142 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-dev\") pod \"cinder-backup-0\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.963157 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e1b895b0-ebcd-4f90-ae7b-633961d007a4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e1b895b0-ebcd-4f90-ae7b-633961d007a4\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.963176 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-xzcv4\" (UniqueName: \"kubernetes.io/projected/e1b895b0-ebcd-4f90-ae7b-633961d007a4-kube-api-access-xzcv4\") pod \"cinder-scheduler-0\" (UID: \"e1b895b0-ebcd-4f90-ae7b-633961d007a4\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.963191 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.963209 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.963226 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.963248 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.963262 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43e985d3-f817-41ac-919d-9625124f4fcd-scripts\") pod \"cinder-backup-0\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.963278 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e1b895b0-ebcd-4f90-ae7b-633961d007a4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e1b895b0-ebcd-4f90-ae7b-633961d007a4\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.963295 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/43e985d3-f817-41ac-919d-9625124f4fcd-config-data-custom\") pod \"cinder-backup-0\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.964538 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e1b895b0-ebcd-4f90-ae7b-633961d007a4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e1b895b0-ebcd-4f90-ae7b-633961d007a4\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.970355 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/e1b895b0-ebcd-4f90-ae7b-633961d007a4-config-data\") pod \"cinder-scheduler-0\" (UID: \"e1b895b0-ebcd-4f90-ae7b-633961d007a4\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.971753 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1b895b0-ebcd-4f90-ae7b-633961d007a4-scripts\") pod \"cinder-scheduler-0\" (UID: \"e1b895b0-ebcd-4f90-ae7b-633961d007a4\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.971767 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e1b895b0-ebcd-4f90-ae7b-633961d007a4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e1b895b0-ebcd-4f90-ae7b-633961d007a4\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 30 22:02:11 crc kubenswrapper[4869]: I0130 22:02:11.996078 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzcv4\" (UniqueName: \"kubernetes.io/projected/e1b895b0-ebcd-4f90-ae7b-633961d007a4-kube-api-access-xzcv4\") pod \"cinder-scheduler-0\" (UID: \"e1b895b0-ebcd-4f90-ae7b-633961d007a4\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.004341 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.005395 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.007882 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-api-config-data" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.015858 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.064277 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-dev\") pod \"cinder-volume-volume1-0\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.064331 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-lib-modules\") pod \"cinder-backup-0\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.064361 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43e985d3-f817-41ac-919d-9625124f4fcd-config-data\") pod \"cinder-backup-0\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.064384 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsrgv\" (UniqueName: \"kubernetes.io/projected/43e985d3-f817-41ac-919d-9625124f4fcd-kube-api-access-dsrgv\") pod \"cinder-backup-0\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.064421 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-dev\") pod \"cinder-volume-volume1-0\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.064431 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-sys\") pod \"cinder-volume-volume1-0\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.064484 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-sys\") pod \"cinder-volume-volume1-0\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.064503 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-lib-modules\") pod \"cinder-backup-0\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.064532 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a748ea99-7369-48cb-8983-9f41ff077f82-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.064570 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a748ea99-7369-48cb-8983-9f41ff077f82-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.064596 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-run\") pod \"cinder-backup-0\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.064618 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.064636 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.064674 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: 
\"a748ea99-7369-48cb-8983-9f41ff077f82\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.064692 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-sys\") pod \"cinder-backup-0\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.064711 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.064726 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.064748 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.064766 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.064781 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-run\") pod \"cinder-volume-volume1-0\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.064800 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-etc-nvme\") pod \"cinder-backup-0\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.064815 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.064830 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a748ea99-7369-48cb-8983-9f41ff077f82-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.064880 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-dev\") pod \"cinder-backup-0\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.064912 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.064935 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.064952 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.064972 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.064989 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43e985d3-f817-41ac-919d-9625124f4fcd-scripts\") pod \"cinder-backup-0\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.065010 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/43e985d3-f817-41ac-919d-9625124f4fcd-config-data-custom\") pod \"cinder-backup-0\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.065031 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-476n9\" (UniqueName: \"kubernetes.io/projected/a748ea99-7369-48cb-8983-9f41ff077f82-kube-api-access-476n9\") pod \"cinder-volume-volume1-0\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.065274 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-etc-nvme\") pod \"cinder-backup-0\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.065313 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: 
\"a748ea99-7369-48cb-8983-9f41ff077f82\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.065360 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.065396 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-run\") pod \"cinder-backup-0\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.065425 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.065428 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.065461 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.065511 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.066138 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.066171 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-sys\") pod \"cinder-backup-0\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.066285 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.066327 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" 
(UniqueName: \"kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.066352 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.066395 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.066433 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-run\") pod \"cinder-volume-volume1-0\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.067434 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-dev\") pod \"cinder-backup-0\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.067537 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.068104 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a748ea99-7369-48cb-8983-9f41ff077f82-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.070194 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43e985d3-f817-41ac-919d-9625124f4fcd-config-data\") pod \"cinder-backup-0\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.070642 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/43e985d3-f817-41ac-919d-9625124f4fcd-config-data-custom\") pod \"cinder-backup-0\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.070776 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43e985d3-f817-41ac-919d-9625124f4fcd-scripts\") pod \"cinder-backup-0\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.071524 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a748ea99-7369-48cb-8983-9f41ff077f82-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.072360 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a748ea99-7369-48cb-8983-9f41ff077f82-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.099950 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-476n9\" (UniqueName: \"kubernetes.io/projected/a748ea99-7369-48cb-8983-9f41ff077f82-kube-api-access-476n9\") pod \"cinder-volume-volume1-0\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.100578 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsrgv\" (UniqueName: \"kubernetes.io/projected/43e985d3-f817-41ac-919d-9625124f4fcd-kube-api-access-dsrgv\") pod \"cinder-backup-0\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.102502 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.166603 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/104613e6-ac18-41c4-8f61-abcdf0399885-config-data-custom\") pod \"cinder-api-0\" (UID: \"104613e6-ac18-41c4-8f61-abcdf0399885\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.166681 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/104613e6-ac18-41c4-8f61-abcdf0399885-logs\") pod \"cinder-api-0\" (UID: \"104613e6-ac18-41c4-8f61-abcdf0399885\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.166711 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/104613e6-ac18-41c4-8f61-abcdf0399885-etc-machine-id\") pod \"cinder-api-0\" (UID: \"104613e6-ac18-41c4-8f61-abcdf0399885\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.166748 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm5fd\" (UniqueName: \"kubernetes.io/projected/104613e6-ac18-41c4-8f61-abcdf0399885-kube-api-access-xm5fd\") pod \"cinder-api-0\" (UID: \"104613e6-ac18-41c4-8f61-abcdf0399885\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.166791 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/104613e6-ac18-41c4-8f61-abcdf0399885-config-data\") pod \"cinder-api-0\" (UID: \"104613e6-ac18-41c4-8f61-abcdf0399885\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 
22:02:12.166830 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/104613e6-ac18-41c4-8f61-abcdf0399885-scripts\") pod \"cinder-api-0\" (UID: \"104613e6-ac18-41c4-8f61-abcdf0399885\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.168981 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.192681 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.267861 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/104613e6-ac18-41c4-8f61-abcdf0399885-scripts\") pod \"cinder-api-0\" (UID: \"104613e6-ac18-41c4-8f61-abcdf0399885\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.268018 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/104613e6-ac18-41c4-8f61-abcdf0399885-config-data-custom\") pod \"cinder-api-0\" (UID: \"104613e6-ac18-41c4-8f61-abcdf0399885\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.268068 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/104613e6-ac18-41c4-8f61-abcdf0399885-logs\") pod \"cinder-api-0\" (UID: \"104613e6-ac18-41c4-8f61-abcdf0399885\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.268088 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/104613e6-ac18-41c4-8f61-abcdf0399885-etc-machine-id\") pod \"cinder-api-0\" (UID: \"104613e6-ac18-41c4-8f61-abcdf0399885\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.268116 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xm5fd\" (UniqueName: \"kubernetes.io/projected/104613e6-ac18-41c4-8f61-abcdf0399885-kube-api-access-xm5fd\") pod \"cinder-api-0\" (UID: \"104613e6-ac18-41c4-8f61-abcdf0399885\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.268152 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/104613e6-ac18-41c4-8f61-abcdf0399885-config-data\") pod \"cinder-api-0\" (UID: \"104613e6-ac18-41c4-8f61-abcdf0399885\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.269816 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/104613e6-ac18-41c4-8f61-abcdf0399885-logs\") pod \"cinder-api-0\" (UID: \"104613e6-ac18-41c4-8f61-abcdf0399885\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.270326 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/104613e6-ac18-41c4-8f61-abcdf0399885-etc-machine-id\") pod \"cinder-api-0\" (UID: \"104613e6-ac18-41c4-8f61-abcdf0399885\") " pod="cinder-kuttl-tests/cinder-api-0" 
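
The records above trace the kubelet volume-mount flow for each cinder pod: "operationExecutor.VerifyControllerAttachedVolume started" as a volume is observed in the desired state, then "operationExecutor.MountVolume started", then "MountVolume.SetUp succeeded" once the secret, host-path, projected, or empty-dir volume is mounted. A quick way to audit a dump like this is to pair the "started" and "succeeded" records per pod UID and volume name. The sketch below does exactly that; it is a minimal illustration written against the escaped-quote klog text seen in this log, not kubelet code, and the regular expressions and stdin-based invocation are assumptions to adapt as needed.

    // mountcheck.go -- a minimal sketch (not kubelet code) that scans a
    // journal dump in the format above and reports volumes for which an
    // "operationExecutor.MountVolume started" record appears but no matching
    // "MountVolume.SetUp succeeded" record follows. The regexes assume the
    // escaped-quote klog text format shown in this log.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        started := regexp.MustCompile(`operationExecutor\.MountVolume started for volume \\?"([^"\\]+)\\?".*?UID: \\?"([0-9a-f-]+)`)
        succeeded := regexp.MustCompile(`MountVolume\.SetUp succeeded for volume \\?"([^"\\]+)\\?".*?UID: \\?"([0-9a-f-]+)`)

        pending := map[string]bool{} // key: pod UID + "/" + volume name
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be very long
        for sc.Scan() {
            // A single dump line may hold several records, so collect all matches.
            for _, m := range started.FindAllStringSubmatch(sc.Text(), -1) {
                pending[m[2]+"/"+m[1]] = true
            }
            for _, m := range succeeded.FindAllStringSubmatch(sc.Text(), -1) {
                delete(pending, m[2]+"/"+m[1])
            }
        }
        for k := range pending {
            fmt.Println("no MountVolume.SetUp succeeded seen for:", k)
        }
    }

Run as, for example, go run mountcheck.go < kubelet.log; any volume printed never reached "MountVolume.SetUp succeeded" in the captured window. In the span above, every started record for cinder-api-0, cinder-backup-0, cinder-scheduler-0, and cinder-volume-volume1-0 is matched by a succeeded record, so mounting is not the failure here.
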
Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.274875 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/104613e6-ac18-41c4-8f61-abcdf0399885-config-data-custom\") pod \"cinder-api-0\" (UID: \"104613e6-ac18-41c4-8f61-abcdf0399885\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.275003 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/104613e6-ac18-41c4-8f61-abcdf0399885-scripts\") pod \"cinder-api-0\" (UID: \"104613e6-ac18-41c4-8f61-abcdf0399885\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.275114 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/104613e6-ac18-41c4-8f61-abcdf0399885-config-data\") pod \"cinder-api-0\" (UID: \"104613e6-ac18-41c4-8f61-abcdf0399885\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.290174 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm5fd\" (UniqueName: \"kubernetes.io/projected/104613e6-ac18-41c4-8f61-abcdf0399885-kube-api-access-xm5fd\") pod \"cinder-api-0\" (UID: \"104613e6-ac18-41c4-8f61-abcdf0399885\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.363399 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.638604 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-0"] Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.698496 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-backup-0"] Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.737309 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-volume-volume1-0"] Jan 30 22:02:12 crc kubenswrapper[4869]: I0130 22:02:12.944150 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 30 22:02:12 crc kubenswrapper[4869]: W0130 22:02:12.948879 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod104613e6_ac18_41c4_8f61_abcdf0399885.slice/crio-b7539ad9d7f96fdc95f8dd714ae1d4fef6a10945ec1714dda7c4fe2c8cbaa1c3 WatchSource:0}: Error finding container b7539ad9d7f96fdc95f8dd714ae1d4fef6a10945ec1714dda7c4fe2c8cbaa1c3: Status 404 returned error can't find the container with id b7539ad9d7f96fdc95f8dd714ae1d4fef6a10945ec1714dda7c4fe2c8cbaa1c3 Jan 30 22:02:13 crc kubenswrapper[4869]: I0130 22:02:13.438468 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-0" event={"ID":"e1b895b0-ebcd-4f90-ae7b-633961d007a4","Type":"ContainerStarted","Data":"25a0b9c206622d850461d1056d024e8af526d8bc266a8178484d3522fd3e6bb2"} Jan 30 22:02:13 crc kubenswrapper[4869]: I0130 22:02:13.439565 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-0" event={"ID":"43e985d3-f817-41ac-919d-9625124f4fcd","Type":"ContainerStarted","Data":"ee640bfd32cda0727ac5e4fbdf83c88e869db885377249347fed4136c448b38e"} Jan 30 22:02:13 crc kubenswrapper[4869]: I0130 22:02:13.443205 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"a748ea99-7369-48cb-8983-9f41ff077f82","Type":"ContainerStarted","Data":"6425717b7bdfb81b8482d5f4a1e1b4b8c1e4a02b23dbd00f5d2e892e3ed8d127"} Jan 30 22:02:13 crc kubenswrapper[4869]: I0130 22:02:13.454119 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"104613e6-ac18-41c4-8f61-abcdf0399885","Type":"ContainerStarted","Data":"b7539ad9d7f96fdc95f8dd714ae1d4fef6a10945ec1714dda7c4fe2c8cbaa1c3"} Jan 30 22:02:14 crc kubenswrapper[4869]: I0130 22:02:14.463722 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"a748ea99-7369-48cb-8983-9f41ff077f82","Type":"ContainerStarted","Data":"aac066990014154f8593ca5e8916003e4c42e5918b39f71959f082a99fd94624"} Jan 30 22:02:14 crc kubenswrapper[4869]: I0130 22:02:14.464405 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"a748ea99-7369-48cb-8983-9f41ff077f82","Type":"ContainerStarted","Data":"1816da16ba6f132ed8735cf7ffd5d756898c2f411035e3f16b271ec139e90c95"} Jan 30 22:02:14 crc kubenswrapper[4869]: I0130 22:02:14.470251 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"104613e6-ac18-41c4-8f61-abcdf0399885","Type":"ContainerStarted","Data":"63f5176c6505ed7c59249b1beaa617a3113094b20a46d78878081a42555f3fc2"} Jan 30 22:02:14 crc kubenswrapper[4869]: I0130 22:02:14.470315 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"104613e6-ac18-41c4-8f61-abcdf0399885","Type":"ContainerStarted","Data":"be7c1e2143e6211b03b0d771a3fe281a885c9d88c06c4df341c9a69e5807c672"} Jan 30 22:02:14 crc kubenswrapper[4869]: I0130 22:02:14.470426 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:02:14 crc kubenswrapper[4869]: I0130 22:02:14.473651 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-0" event={"ID":"e1b895b0-ebcd-4f90-ae7b-633961d007a4","Type":"ContainerStarted","Data":"405249c3040942893a3cc2649766ede319a616f3e6290634faa7245dc8b085f1"} Jan 30 22:02:14 crc kubenswrapper[4869]: I0130 22:02:14.475444 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-0" event={"ID":"43e985d3-f817-41ac-919d-9625124f4fcd","Type":"ContainerStarted","Data":"22c27a6999604729fe94cc707b767ac1a410e22f58d8992b96b97c642b095f1a"} Jan 30 22:02:14 crc kubenswrapper[4869]: I0130 22:02:14.475471 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-0" event={"ID":"43e985d3-f817-41ac-919d-9625124f4fcd","Type":"ContainerStarted","Data":"9767c8399608620e9e185888f758761f7fbb19790189630f3b39670088d1f93a"} Jan 30 22:02:14 crc kubenswrapper[4869]: I0130 22:02:14.500152 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podStartSLOduration=2.297381034 podStartE2EDuration="3.500107079s" podCreationTimestamp="2026-01-30 22:02:11 +0000 UTC" firstStartedPulling="2026-01-30 22:02:12.746047463 +0000 UTC m=+1133.631805498" lastFinishedPulling="2026-01-30 22:02:13.948773518 +0000 UTC m=+1134.834531543" observedRunningTime="2026-01-30 22:02:14.494750821 +0000 UTC m=+1135.380508846" watchObservedRunningTime="2026-01-30 22:02:14.500107079 +0000 UTC m=+1135.385865104" Jan 30 22:02:14 crc kubenswrapper[4869]: I0130 
22:02:14.533826 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-backup-0" podStartSLOduration=2.77884573 podStartE2EDuration="3.533807198s" podCreationTimestamp="2026-01-30 22:02:11 +0000 UTC" firstStartedPulling="2026-01-30 22:02:12.713566402 +0000 UTC m=+1133.599324427" lastFinishedPulling="2026-01-30 22:02:13.46852787 +0000 UTC m=+1134.354285895" observedRunningTime="2026-01-30 22:02:14.532710533 +0000 UTC m=+1135.418468568" watchObservedRunningTime="2026-01-30 22:02:14.533807198 +0000 UTC m=+1135.419565223" Jan 30 22:02:14 crc kubenswrapper[4869]: I0130 22:02:14.556472 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-api-0" podStartSLOduration=3.556449589 podStartE2EDuration="3.556449589s" podCreationTimestamp="2026-01-30 22:02:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:02:14.552436343 +0000 UTC m=+1135.438194368" watchObservedRunningTime="2026-01-30 22:02:14.556449589 +0000 UTC m=+1135.442207614" Jan 30 22:02:15 crc kubenswrapper[4869]: I0130 22:02:15.496103 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-0" event={"ID":"e1b895b0-ebcd-4f90-ae7b-633961d007a4","Type":"ContainerStarted","Data":"c324f8439d3e895528b2f2380b00c266d6c750d360d0227d35c563e523f6f44c"} Jan 30 22:02:15 crc kubenswrapper[4869]: I0130 22:02:15.520031 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-scheduler-0" podStartSLOduration=3.933650059 podStartE2EDuration="4.51999513s" podCreationTimestamp="2026-01-30 22:02:11 +0000 UTC" firstStartedPulling="2026-01-30 22:02:12.648610521 +0000 UTC m=+1133.534368546" lastFinishedPulling="2026-01-30 22:02:13.234955592 +0000 UTC m=+1134.120713617" observedRunningTime="2026-01-30 22:02:15.51300938 +0000 UTC m=+1136.398767405" watchObservedRunningTime="2026-01-30 22:02:15.51999513 +0000 UTC m=+1136.405753155" Jan 30 22:02:17 crc kubenswrapper[4869]: I0130 22:02:17.103404 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 30 22:02:17 crc kubenswrapper[4869]: I0130 22:02:17.169907 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:02:17 crc kubenswrapper[4869]: I0130 22:02:17.194621 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:17 crc kubenswrapper[4869]: I0130 22:02:17.524107 4869 generic.go:334] "Generic (PLEG): container finished" podID="a748ea99-7369-48cb-8983-9f41ff077f82" containerID="aac066990014154f8593ca5e8916003e4c42e5918b39f71959f082a99fd94624" exitCode=1 Jan 30 22:02:17 crc kubenswrapper[4869]: I0130 22:02:17.524152 4869 generic.go:334] "Generic (PLEG): container finished" podID="a748ea99-7369-48cb-8983-9f41ff077f82" containerID="1816da16ba6f132ed8735cf7ffd5d756898c2f411035e3f16b271ec139e90c95" exitCode=1 Jan 30 22:02:17 crc kubenswrapper[4869]: I0130 22:02:17.524206 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"a748ea99-7369-48cb-8983-9f41ff077f82","Type":"ContainerDied","Data":"aac066990014154f8593ca5e8916003e4c42e5918b39f71959f082a99fd94624"} Jan 30 22:02:17 crc kubenswrapper[4869]: I0130 22:02:17.524263 4869 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"a748ea99-7369-48cb-8983-9f41ff077f82","Type":"ContainerDied","Data":"1816da16ba6f132ed8735cf7ffd5d756898c2f411035e3f16b271ec139e90c95"} Jan 30 22:02:17 crc kubenswrapper[4869]: I0130 22:02:17.525109 4869 scope.go:117] "RemoveContainer" containerID="1816da16ba6f132ed8735cf7ffd5d756898c2f411035e3f16b271ec139e90c95" Jan 30 22:02:17 crc kubenswrapper[4869]: I0130 22:02:17.525158 4869 scope.go:117] "RemoveContainer" containerID="aac066990014154f8593ca5e8916003e4c42e5918b39f71959f082a99fd94624" Jan 30 22:02:18 crc kubenswrapper[4869]: I0130 22:02:18.194597 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:18 crc kubenswrapper[4869]: I0130 22:02:18.533442 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"a748ea99-7369-48cb-8983-9f41ff077f82","Type":"ContainerStarted","Data":"0bef61d842ec402ec7353d51ba811c0a085833b55df9588e81c951508b6b0523"} Jan 30 22:02:18 crc kubenswrapper[4869]: I0130 22:02:18.533809 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"a748ea99-7369-48cb-8983-9f41ff077f82","Type":"ContainerStarted","Data":"fc93ec2172f48742e12eb47bcd97f2a3b220a955cf84e25a3daaa9b3a7d88d6d"} Jan 30 22:02:20 crc kubenswrapper[4869]: I0130 22:02:20.556284 4869 generic.go:334] "Generic (PLEG): container finished" podID="a748ea99-7369-48cb-8983-9f41ff077f82" containerID="0bef61d842ec402ec7353d51ba811c0a085833b55df9588e81c951508b6b0523" exitCode=1 Jan 30 22:02:20 crc kubenswrapper[4869]: I0130 22:02:20.556925 4869 generic.go:334] "Generic (PLEG): container finished" podID="a748ea99-7369-48cb-8983-9f41ff077f82" containerID="fc93ec2172f48742e12eb47bcd97f2a3b220a955cf84e25a3daaa9b3a7d88d6d" exitCode=1 Jan 30 22:02:20 crc kubenswrapper[4869]: I0130 22:02:20.556388 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"a748ea99-7369-48cb-8983-9f41ff077f82","Type":"ContainerDied","Data":"0bef61d842ec402ec7353d51ba811c0a085833b55df9588e81c951508b6b0523"} Jan 30 22:02:20 crc kubenswrapper[4869]: I0130 22:02:20.556994 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"a748ea99-7369-48cb-8983-9f41ff077f82","Type":"ContainerDied","Data":"fc93ec2172f48742e12eb47bcd97f2a3b220a955cf84e25a3daaa9b3a7d88d6d"} Jan 30 22:02:20 crc kubenswrapper[4869]: I0130 22:02:20.557037 4869 scope.go:117] "RemoveContainer" containerID="aac066990014154f8593ca5e8916003e4c42e5918b39f71959f082a99fd94624" Jan 30 22:02:20 crc kubenswrapper[4869]: I0130 22:02:20.557698 4869 scope.go:117] "RemoveContainer" containerID="fc93ec2172f48742e12eb47bcd97f2a3b220a955cf84e25a3daaa9b3a7d88d6d" Jan 30 22:02:20 crc kubenswrapper[4869]: I0130 22:02:20.557732 4869 scope.go:117] "RemoveContainer" containerID="0bef61d842ec402ec7353d51ba811c0a085833b55df9588e81c951508b6b0523" Jan 30 22:02:20 crc kubenswrapper[4869]: E0130 22:02:20.558248 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(a748ea99-7369-48cb-8983-9f41ff077f82)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 10s restarting failed container=probe 
pod=cinder-volume-volume1-0_cinder-kuttl-tests(a748ea99-7369-48cb-8983-9f41ff077f82)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="a748ea99-7369-48cb-8983-9f41ff077f82" Jan 30 22:02:20 crc kubenswrapper[4869]: I0130 22:02:20.615934 4869 scope.go:117] "RemoveContainer" containerID="1816da16ba6f132ed8735cf7ffd5d756898c2f411035e3f16b271ec139e90c95" Jan 30 22:02:21 crc kubenswrapper[4869]: I0130 22:02:21.194989 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:21 crc kubenswrapper[4869]: I0130 22:02:21.567136 4869 scope.go:117] "RemoveContainer" containerID="fc93ec2172f48742e12eb47bcd97f2a3b220a955cf84e25a3daaa9b3a7d88d6d" Jan 30 22:02:21 crc kubenswrapper[4869]: I0130 22:02:21.567548 4869 scope.go:117] "RemoveContainer" containerID="0bef61d842ec402ec7353d51ba811c0a085833b55df9588e81c951508b6b0523" Jan 30 22:02:21 crc kubenswrapper[4869]: E0130 22:02:21.567939 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(a748ea99-7369-48cb-8983-9f41ff077f82)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 10s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(a748ea99-7369-48cb-8983-9f41ff077f82)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="a748ea99-7369-48cb-8983-9f41ff077f82" Jan 30 22:02:22 crc kubenswrapper[4869]: I0130 22:02:22.194179 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:22 crc kubenswrapper[4869]: I0130 22:02:22.194245 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:22 crc kubenswrapper[4869]: I0130 22:02:22.324792 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 30 22:02:22 crc kubenswrapper[4869]: I0130 22:02:22.433854 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:02:22 crc kubenswrapper[4869]: I0130 22:02:22.571808 4869 scope.go:117] "RemoveContainer" containerID="fc93ec2172f48742e12eb47bcd97f2a3b220a955cf84e25a3daaa9b3a7d88d6d" Jan 30 22:02:22 crc kubenswrapper[4869]: I0130 22:02:22.572343 4869 scope.go:117] "RemoveContainer" containerID="0bef61d842ec402ec7353d51ba811c0a085833b55df9588e81c951508b6b0523" Jan 30 22:02:22 crc kubenswrapper[4869]: E0130 22:02:22.572616 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(a748ea99-7369-48cb-8983-9f41ff077f82)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 10s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(a748ea99-7369-48cb-8983-9f41ff077f82)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="a748ea99-7369-48cb-8983-9f41ff077f82" Jan 30 22:02:23 crc kubenswrapper[4869]: I0130 22:02:23.577695 4869 scope.go:117] "RemoveContainer" containerID="fc93ec2172f48742e12eb47bcd97f2a3b220a955cf84e25a3daaa9b3a7d88d6d" Jan 30 22:02:23 crc kubenswrapper[4869]: I0130 22:02:23.578044 4869 scope.go:117] 
"RemoveContainer" containerID="0bef61d842ec402ec7353d51ba811c0a085833b55df9588e81c951508b6b0523" Jan 30 22:02:23 crc kubenswrapper[4869]: E0130 22:02:23.578298 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(a748ea99-7369-48cb-8983-9f41ff077f82)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 10s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(a748ea99-7369-48cb-8983-9f41ff077f82)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="a748ea99-7369-48cb-8983-9f41ff077f82" Jan 30 22:02:24 crc kubenswrapper[4869]: I0130 22:02:24.589562 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:02:26 crc kubenswrapper[4869]: I0130 22:02:26.448233 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-1"] Jan 30 22:02:26 crc kubenswrapper[4869]: I0130 22:02:26.450258 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 30 22:02:26 crc kubenswrapper[4869]: I0130 22:02:26.458836 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-1"] Jan 30 22:02:26 crc kubenswrapper[4869]: I0130 22:02:26.601243 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3f611a4-7294-4c87-a871-0c720a747866-config-data\") pod \"cinder-scheduler-1\" (UID: \"b3f611a4-7294-4c87-a871-0c720a747866\") " pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 30 22:02:26 crc kubenswrapper[4869]: I0130 22:02:26.601285 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b3f611a4-7294-4c87-a871-0c720a747866-etc-machine-id\") pod \"cinder-scheduler-1\" (UID: \"b3f611a4-7294-4c87-a871-0c720a747866\") " pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 30 22:02:26 crc kubenswrapper[4869]: I0130 22:02:26.601340 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkj5l\" (UniqueName: \"kubernetes.io/projected/b3f611a4-7294-4c87-a871-0c720a747866-kube-api-access-zkj5l\") pod \"cinder-scheduler-1\" (UID: \"b3f611a4-7294-4c87-a871-0c720a747866\") " pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 30 22:02:26 crc kubenswrapper[4869]: I0130 22:02:26.601416 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b3f611a4-7294-4c87-a871-0c720a747866-config-data-custom\") pod \"cinder-scheduler-1\" (UID: \"b3f611a4-7294-4c87-a871-0c720a747866\") " pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 30 22:02:26 crc kubenswrapper[4869]: I0130 22:02:26.601456 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3f611a4-7294-4c87-a871-0c720a747866-scripts\") pod \"cinder-scheduler-1\" (UID: \"b3f611a4-7294-4c87-a871-0c720a747866\") " pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 30 22:02:26 crc kubenswrapper[4869]: I0130 22:02:26.702341 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b3f611a4-7294-4c87-a871-0c720a747866-config-data-custom\") pod \"cinder-scheduler-1\" (UID: \"b3f611a4-7294-4c87-a871-0c720a747866\") " pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 30 22:02:26 crc kubenswrapper[4869]: I0130 22:02:26.702404 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3f611a4-7294-4c87-a871-0c720a747866-scripts\") pod \"cinder-scheduler-1\" (UID: \"b3f611a4-7294-4c87-a871-0c720a747866\") " pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 30 22:02:26 crc kubenswrapper[4869]: I0130 22:02:26.702462 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3f611a4-7294-4c87-a871-0c720a747866-config-data\") pod \"cinder-scheduler-1\" (UID: \"b3f611a4-7294-4c87-a871-0c720a747866\") " pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 30 22:02:26 crc kubenswrapper[4869]: I0130 22:02:26.702479 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b3f611a4-7294-4c87-a871-0c720a747866-etc-machine-id\") pod \"cinder-scheduler-1\" (UID: \"b3f611a4-7294-4c87-a871-0c720a747866\") " pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 30 22:02:26 crc kubenswrapper[4869]: I0130 22:02:26.702532 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkj5l\" (UniqueName: \"kubernetes.io/projected/b3f611a4-7294-4c87-a871-0c720a747866-kube-api-access-zkj5l\") pod \"cinder-scheduler-1\" (UID: \"b3f611a4-7294-4c87-a871-0c720a747866\") " pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 30 22:02:26 crc kubenswrapper[4869]: I0130 22:02:26.702586 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b3f611a4-7294-4c87-a871-0c720a747866-etc-machine-id\") pod \"cinder-scheduler-1\" (UID: \"b3f611a4-7294-4c87-a871-0c720a747866\") " pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 30 22:02:26 crc kubenswrapper[4869]: I0130 22:02:26.708488 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b3f611a4-7294-4c87-a871-0c720a747866-config-data-custom\") pod \"cinder-scheduler-1\" (UID: \"b3f611a4-7294-4c87-a871-0c720a747866\") " pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 30 22:02:26 crc kubenswrapper[4869]: I0130 22:02:26.708627 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3f611a4-7294-4c87-a871-0c720a747866-scripts\") pod \"cinder-scheduler-1\" (UID: \"b3f611a4-7294-4c87-a871-0c720a747866\") " pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 30 22:02:26 crc kubenswrapper[4869]: I0130 22:02:26.713563 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3f611a4-7294-4c87-a871-0c720a747866-config-data\") pod \"cinder-scheduler-1\" (UID: \"b3f611a4-7294-4c87-a871-0c720a747866\") " pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 30 22:02:26 crc kubenswrapper[4869]: I0130 22:02:26.718681 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkj5l\" (UniqueName: \"kubernetes.io/projected/b3f611a4-7294-4c87-a871-0c720a747866-kube-api-access-zkj5l\") pod \"cinder-scheduler-1\" (UID: \"b3f611a4-7294-4c87-a871-0c720a747866\") " 
pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 30 22:02:26 crc kubenswrapper[4869]: I0130 22:02:26.778949 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 30 22:02:27 crc kubenswrapper[4869]: I0130 22:02:27.271746 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-1"] Jan 30 22:02:27 crc kubenswrapper[4869]: I0130 22:02:27.625321 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-1" event={"ID":"b3f611a4-7294-4c87-a871-0c720a747866","Type":"ContainerStarted","Data":"ca420615f735c4b2bcb13c02651043430fe302ae695ef8d96f5ab0461b512afe"} Jan 30 22:02:28 crc kubenswrapper[4869]: I0130 22:02:28.633691 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-1" event={"ID":"b3f611a4-7294-4c87-a871-0c720a747866","Type":"ContainerStarted","Data":"3a1eebe3c7a46af2042a498d59b26e4c80ab05192a83a9112b694b5b855e3642"} Jan 30 22:02:28 crc kubenswrapper[4869]: I0130 22:02:28.634030 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-1" event={"ID":"b3f611a4-7294-4c87-a871-0c720a747866","Type":"ContainerStarted","Data":"93a5ee888a7af3ffd61245d80fdf9f750155c83d02ef63cde6e7caa60262f087"} Jan 30 22:02:28 crc kubenswrapper[4869]: I0130 22:02:28.652490 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-scheduler-1" podStartSLOduration=2.652475427 podStartE2EDuration="2.652475427s" podCreationTimestamp="2026-01-30 22:02:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:02:28.648684798 +0000 UTC m=+1149.534442823" watchObservedRunningTime="2026-01-30 22:02:28.652475427 +0000 UTC m=+1149.538233452" Jan 30 22:02:31 crc kubenswrapper[4869]: I0130 22:02:31.780082 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 30 22:02:35 crc kubenswrapper[4869]: I0130 22:02:35.877656 4869 scope.go:117] "RemoveContainer" containerID="fc93ec2172f48742e12eb47bcd97f2a3b220a955cf84e25a3daaa9b3a7d88d6d" Jan 30 22:02:35 crc kubenswrapper[4869]: I0130 22:02:35.878410 4869 scope.go:117] "RemoveContainer" containerID="0bef61d842ec402ec7353d51ba811c0a085833b55df9588e81c951508b6b0523" Jan 30 22:02:36 crc kubenswrapper[4869]: I0130 22:02:36.697232 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"a748ea99-7369-48cb-8983-9f41ff077f82","Type":"ContainerStarted","Data":"4ca6ef543ed5bd6ebb8ca9f7da6c793efc6e543796d04ba49445fdd02017e635"} Jan 30 22:02:36 crc kubenswrapper[4869]: I0130 22:02:36.697810 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"a748ea99-7369-48cb-8983-9f41ff077f82","Type":"ContainerStarted","Data":"cd9edc9d7bc98458cffddcf41f1c576b57630b44231cecd36b86ff8d42c5b329"} Jan 30 22:02:37 crc kubenswrapper[4869]: I0130 22:02:37.048348 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 30 22:02:37 crc kubenswrapper[4869]: I0130 22:02:37.124586 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-2"] Jan 30 22:02:37 crc kubenswrapper[4869]: I0130 22:02:37.126042 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 30 22:02:37 crc kubenswrapper[4869]: I0130 22:02:37.138409 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-2"] Jan 30 22:02:37 crc kubenswrapper[4869]: I0130 22:02:37.193992 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:37 crc kubenswrapper[4869]: I0130 22:02:37.271714 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9f057715-61ab-4c8f-8839-c80776f31b1e-config-data-custom\") pod \"cinder-scheduler-2\" (UID: \"9f057715-61ab-4c8f-8839-c80776f31b1e\") " pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 30 22:02:37 crc kubenswrapper[4869]: I0130 22:02:37.271760 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f057715-61ab-4c8f-8839-c80776f31b1e-config-data\") pod \"cinder-scheduler-2\" (UID: \"9f057715-61ab-4c8f-8839-c80776f31b1e\") " pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 30 22:02:37 crc kubenswrapper[4869]: I0130 22:02:37.271819 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27dmb\" (UniqueName: \"kubernetes.io/projected/9f057715-61ab-4c8f-8839-c80776f31b1e-kube-api-access-27dmb\") pod \"cinder-scheduler-2\" (UID: \"9f057715-61ab-4c8f-8839-c80776f31b1e\") " pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 30 22:02:37 crc kubenswrapper[4869]: I0130 22:02:37.271840 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f057715-61ab-4c8f-8839-c80776f31b1e-scripts\") pod \"cinder-scheduler-2\" (UID: \"9f057715-61ab-4c8f-8839-c80776f31b1e\") " pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 30 22:02:37 crc kubenswrapper[4869]: I0130 22:02:37.272043 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9f057715-61ab-4c8f-8839-c80776f31b1e-etc-machine-id\") pod \"cinder-scheduler-2\" (UID: \"9f057715-61ab-4c8f-8839-c80776f31b1e\") " pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 30 22:02:37 crc kubenswrapper[4869]: I0130 22:02:37.373409 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9f057715-61ab-4c8f-8839-c80776f31b1e-config-data-custom\") pod \"cinder-scheduler-2\" (UID: \"9f057715-61ab-4c8f-8839-c80776f31b1e\") " pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 30 22:02:37 crc kubenswrapper[4869]: I0130 22:02:37.373470 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f057715-61ab-4c8f-8839-c80776f31b1e-config-data\") pod \"cinder-scheduler-2\" (UID: \"9f057715-61ab-4c8f-8839-c80776f31b1e\") " pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 30 22:02:37 crc kubenswrapper[4869]: I0130 22:02:37.373547 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27dmb\" (UniqueName: \"kubernetes.io/projected/9f057715-61ab-4c8f-8839-c80776f31b1e-kube-api-access-27dmb\") pod \"cinder-scheduler-2\" (UID: \"9f057715-61ab-4c8f-8839-c80776f31b1e\") " pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 30 
22:02:37 crc kubenswrapper[4869]: I0130 22:02:37.373579 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f057715-61ab-4c8f-8839-c80776f31b1e-scripts\") pod \"cinder-scheduler-2\" (UID: \"9f057715-61ab-4c8f-8839-c80776f31b1e\") " pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 30 22:02:37 crc kubenswrapper[4869]: I0130 22:02:37.373635 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9f057715-61ab-4c8f-8839-c80776f31b1e-etc-machine-id\") pod \"cinder-scheduler-2\" (UID: \"9f057715-61ab-4c8f-8839-c80776f31b1e\") " pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 30 22:02:37 crc kubenswrapper[4869]: I0130 22:02:37.373723 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9f057715-61ab-4c8f-8839-c80776f31b1e-etc-machine-id\") pod \"cinder-scheduler-2\" (UID: \"9f057715-61ab-4c8f-8839-c80776f31b1e\") " pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 30 22:02:37 crc kubenswrapper[4869]: I0130 22:02:37.387416 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f057715-61ab-4c8f-8839-c80776f31b1e-scripts\") pod \"cinder-scheduler-2\" (UID: \"9f057715-61ab-4c8f-8839-c80776f31b1e\") " pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 30 22:02:37 crc kubenswrapper[4869]: I0130 22:02:37.387606 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9f057715-61ab-4c8f-8839-c80776f31b1e-config-data-custom\") pod \"cinder-scheduler-2\" (UID: \"9f057715-61ab-4c8f-8839-c80776f31b1e\") " pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 30 22:02:37 crc kubenswrapper[4869]: I0130 22:02:37.389878 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f057715-61ab-4c8f-8839-c80776f31b1e-config-data\") pod \"cinder-scheduler-2\" (UID: \"9f057715-61ab-4c8f-8839-c80776f31b1e\") " pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 30 22:02:37 crc kubenswrapper[4869]: I0130 22:02:37.410415 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27dmb\" (UniqueName: \"kubernetes.io/projected/9f057715-61ab-4c8f-8839-c80776f31b1e-kube-api-access-27dmb\") pod \"cinder-scheduler-2\" (UID: \"9f057715-61ab-4c8f-8839-c80776f31b1e\") " pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 30 22:02:37 crc kubenswrapper[4869]: I0130 22:02:37.440574 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 30 22:02:37 crc kubenswrapper[4869]: I0130 22:02:37.897177 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-2"] Jan 30 22:02:38 crc kubenswrapper[4869]: I0130 22:02:38.787696 4869 generic.go:334] "Generic (PLEG): container finished" podID="a748ea99-7369-48cb-8983-9f41ff077f82" containerID="4ca6ef543ed5bd6ebb8ca9f7da6c793efc6e543796d04ba49445fdd02017e635" exitCode=1 Jan 30 22:02:38 crc kubenswrapper[4869]: I0130 22:02:38.788067 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"a748ea99-7369-48cb-8983-9f41ff077f82","Type":"ContainerDied","Data":"4ca6ef543ed5bd6ebb8ca9f7da6c793efc6e543796d04ba49445fdd02017e635"} Jan 30 22:02:38 crc kubenswrapper[4869]: I0130 22:02:38.788107 4869 scope.go:117] "RemoveContainer" containerID="0bef61d842ec402ec7353d51ba811c0a085833b55df9588e81c951508b6b0523" Jan 30 22:02:38 crc kubenswrapper[4869]: I0130 22:02:38.788728 4869 scope.go:117] "RemoveContainer" containerID="4ca6ef543ed5bd6ebb8ca9f7da6c793efc6e543796d04ba49445fdd02017e635" Jan 30 22:02:38 crc kubenswrapper[4869]: E0130 22:02:38.789030 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 20s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(a748ea99-7369-48cb-8983-9f41ff077f82)\"" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="a748ea99-7369-48cb-8983-9f41ff077f82" Jan 30 22:02:38 crc kubenswrapper[4869]: I0130 22:02:38.806321 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-2" event={"ID":"9f057715-61ab-4c8f-8839-c80776f31b1e","Type":"ContainerStarted","Data":"deb6b99086a13bcf4bba05843e108c791d972517bc8fb3f7dc143a77968c8575"} Jan 30 22:02:38 crc kubenswrapper[4869]: I0130 22:02:38.806370 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-2" event={"ID":"9f057715-61ab-4c8f-8839-c80776f31b1e","Type":"ContainerStarted","Data":"f513d57b12047ef5b17eed7855d0cd79cd6eb3c880a142586b51145607b82baa"} Jan 30 22:02:39 crc kubenswrapper[4869]: I0130 22:02:39.814825 4869 generic.go:334] "Generic (PLEG): container finished" podID="a748ea99-7369-48cb-8983-9f41ff077f82" containerID="cd9edc9d7bc98458cffddcf41f1c576b57630b44231cecd36b86ff8d42c5b329" exitCode=1 Jan 30 22:02:39 crc kubenswrapper[4869]: I0130 22:02:39.814876 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"a748ea99-7369-48cb-8983-9f41ff077f82","Type":"ContainerDied","Data":"cd9edc9d7bc98458cffddcf41f1c576b57630b44231cecd36b86ff8d42c5b329"} Jan 30 22:02:39 crc kubenswrapper[4869]: I0130 22:02:39.815193 4869 scope.go:117] "RemoveContainer" containerID="fc93ec2172f48742e12eb47bcd97f2a3b220a955cf84e25a3daaa9b3a7d88d6d" Jan 30 22:02:39 crc kubenswrapper[4869]: I0130 22:02:39.815703 4869 scope.go:117] "RemoveContainer" containerID="cd9edc9d7bc98458cffddcf41f1c576b57630b44231cecd36b86ff8d42c5b329" Jan 30 22:02:39 crc kubenswrapper[4869]: I0130 22:02:39.815741 4869 scope.go:117] "RemoveContainer" containerID="4ca6ef543ed5bd6ebb8ca9f7da6c793efc6e543796d04ba49445fdd02017e635" Jan 30 22:02:39 crc kubenswrapper[4869]: E0130 22:02:39.816011 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 20s restarting failed 
container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(a748ea99-7369-48cb-8983-9f41ff077f82)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 20s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(a748ea99-7369-48cb-8983-9f41ff077f82)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="a748ea99-7369-48cb-8983-9f41ff077f82" Jan 30 22:02:39 crc kubenswrapper[4869]: I0130 22:02:39.816854 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-2" event={"ID":"9f057715-61ab-4c8f-8839-c80776f31b1e","Type":"ContainerStarted","Data":"f1f9ad0915e343c780125090f87a698c2765623cf448b74e85bc26b6461a8e3b"} Jan 30 22:02:39 crc kubenswrapper[4869]: I0130 22:02:39.858215 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-scheduler-2" podStartSLOduration=2.858194222 podStartE2EDuration="2.858194222s" podCreationTimestamp="2026-01-30 22:02:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:02:39.857148268 +0000 UTC m=+1160.742906293" watchObservedRunningTime="2026-01-30 22:02:39.858194222 +0000 UTC m=+1160.743952257" Jan 30 22:02:42 crc kubenswrapper[4869]: I0130 22:02:42.193848 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:42 crc kubenswrapper[4869]: I0130 22:02:42.194592 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:02:42 crc kubenswrapper[4869]: I0130 22:02:42.195076 4869 scope.go:117] "RemoveContainer" containerID="cd9edc9d7bc98458cffddcf41f1c576b57630b44231cecd36b86ff8d42c5b329" Jan 30 22:02:42 crc kubenswrapper[4869]: I0130 22:02:42.195093 4869 scope.go:117] "RemoveContainer" containerID="4ca6ef543ed5bd6ebb8ca9f7da6c793efc6e543796d04ba49445fdd02017e635" Jan 30 22:02:42 crc kubenswrapper[4869]: E0130 22:02:42.195486 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(a748ea99-7369-48cb-8983-9f41ff077f82)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 20s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(a748ea99-7369-48cb-8983-9f41ff077f82)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="a748ea99-7369-48cb-8983-9f41ff077f82" Jan 30 22:02:42 crc kubenswrapper[4869]: I0130 22:02:42.441585 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 30 22:02:42 crc kubenswrapper[4869]: I0130 22:02:42.844065 4869 scope.go:117] "RemoveContainer" containerID="cd9edc9d7bc98458cffddcf41f1c576b57630b44231cecd36b86ff8d42c5b329" Jan 30 22:02:42 crc kubenswrapper[4869]: I0130 22:02:42.844098 4869 scope.go:117] "RemoveContainer" containerID="4ca6ef543ed5bd6ebb8ca9f7da6c793efc6e543796d04ba49445fdd02017e635" Jan 30 22:02:42 crc kubenswrapper[4869]: E0130 22:02:42.844357 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cinder-volume 
pod=cinder-volume-volume1-0_cinder-kuttl-tests(a748ea99-7369-48cb-8983-9f41ff077f82)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 20s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(a748ea99-7369-48cb-8983-9f41ff077f82)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="a748ea99-7369-48cb-8983-9f41ff077f82" Jan 30 22:02:47 crc kubenswrapper[4869]: I0130 22:02:47.684270 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 30 22:02:48 crc kubenswrapper[4869]: I0130 22:02:48.895281 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-2"] Jan 30 22:02:48 crc kubenswrapper[4869]: I0130 22:02:48.895546 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-scheduler-2" podUID="9f057715-61ab-4c8f-8839-c80776f31b1e" containerName="cinder-scheduler" containerID="cri-o://deb6b99086a13bcf4bba05843e108c791d972517bc8fb3f7dc143a77968c8575" gracePeriod=30 Jan 30 22:02:48 crc kubenswrapper[4869]: I0130 22:02:48.895698 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-scheduler-2" podUID="9f057715-61ab-4c8f-8839-c80776f31b1e" containerName="probe" containerID="cri-o://f1f9ad0915e343c780125090f87a698c2765623cf448b74e85bc26b6461a8e3b" gracePeriod=30 Jan 30 22:02:50 crc kubenswrapper[4869]: I0130 22:02:50.900045 4869 generic.go:334] "Generic (PLEG): container finished" podID="9f057715-61ab-4c8f-8839-c80776f31b1e" containerID="f1f9ad0915e343c780125090f87a698c2765623cf448b74e85bc26b6461a8e3b" exitCode=0 Jan 30 22:02:50 crc kubenswrapper[4869]: I0130 22:02:50.900106 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-2" event={"ID":"9f057715-61ab-4c8f-8839-c80776f31b1e","Type":"ContainerDied","Data":"f1f9ad0915e343c780125090f87a698c2765623cf448b74e85bc26b6461a8e3b"} Jan 30 22:02:53 crc kubenswrapper[4869]: I0130 22:02:53.877135 4869 scope.go:117] "RemoveContainer" containerID="cd9edc9d7bc98458cffddcf41f1c576b57630b44231cecd36b86ff8d42c5b329" Jan 30 22:02:53 crc kubenswrapper[4869]: I0130 22:02:53.877453 4869 scope.go:117] "RemoveContainer" containerID="4ca6ef543ed5bd6ebb8ca9f7da6c793efc6e543796d04ba49445fdd02017e635" Jan 30 22:02:53 crc kubenswrapper[4869]: E0130 22:02:53.877649 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(a748ea99-7369-48cb-8983-9f41ff077f82)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 20s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(a748ea99-7369-48cb-8983-9f41ff077f82)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="a748ea99-7369-48cb-8983-9f41ff077f82" Jan 30 22:02:54 crc kubenswrapper[4869]: I0130 22:02:54.931732 4869 generic.go:334] "Generic (PLEG): container finished" podID="9f057715-61ab-4c8f-8839-c80776f31b1e" containerID="deb6b99086a13bcf4bba05843e108c791d972517bc8fb3f7dc143a77968c8575" exitCode=0 Jan 30 22:02:54 crc kubenswrapper[4869]: I0130 22:02:54.931827 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-2" 
event={"ID":"9f057715-61ab-4c8f-8839-c80776f31b1e","Type":"ContainerDied","Data":"deb6b99086a13bcf4bba05843e108c791d972517bc8fb3f7dc143a77968c8575"} Jan 30 22:02:55 crc kubenswrapper[4869]: I0130 22:02:55.070376 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 30 22:02:55 crc kubenswrapper[4869]: I0130 22:02:55.173018 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9f057715-61ab-4c8f-8839-c80776f31b1e-config-data-custom\") pod \"9f057715-61ab-4c8f-8839-c80776f31b1e\" (UID: \"9f057715-61ab-4c8f-8839-c80776f31b1e\") " Jan 30 22:02:55 crc kubenswrapper[4869]: I0130 22:02:55.173138 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f057715-61ab-4c8f-8839-c80776f31b1e-config-data\") pod \"9f057715-61ab-4c8f-8839-c80776f31b1e\" (UID: \"9f057715-61ab-4c8f-8839-c80776f31b1e\") " Jan 30 22:02:55 crc kubenswrapper[4869]: I0130 22:02:55.173223 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27dmb\" (UniqueName: \"kubernetes.io/projected/9f057715-61ab-4c8f-8839-c80776f31b1e-kube-api-access-27dmb\") pod \"9f057715-61ab-4c8f-8839-c80776f31b1e\" (UID: \"9f057715-61ab-4c8f-8839-c80776f31b1e\") " Jan 30 22:02:55 crc kubenswrapper[4869]: I0130 22:02:55.173306 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f057715-61ab-4c8f-8839-c80776f31b1e-scripts\") pod \"9f057715-61ab-4c8f-8839-c80776f31b1e\" (UID: \"9f057715-61ab-4c8f-8839-c80776f31b1e\") " Jan 30 22:02:55 crc kubenswrapper[4869]: I0130 22:02:55.173339 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9f057715-61ab-4c8f-8839-c80776f31b1e-etc-machine-id\") pod \"9f057715-61ab-4c8f-8839-c80776f31b1e\" (UID: \"9f057715-61ab-4c8f-8839-c80776f31b1e\") " Jan 30 22:02:55 crc kubenswrapper[4869]: I0130 22:02:55.173559 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f057715-61ab-4c8f-8839-c80776f31b1e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9f057715-61ab-4c8f-8839-c80776f31b1e" (UID: "9f057715-61ab-4c8f-8839-c80776f31b1e"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:02:55 crc kubenswrapper[4869]: I0130 22:02:55.174260 4869 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9f057715-61ab-4c8f-8839-c80776f31b1e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:55 crc kubenswrapper[4869]: I0130 22:02:55.178604 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f057715-61ab-4c8f-8839-c80776f31b1e-scripts" (OuterVolumeSpecName: "scripts") pod "9f057715-61ab-4c8f-8839-c80776f31b1e" (UID: "9f057715-61ab-4c8f-8839-c80776f31b1e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:55 crc kubenswrapper[4869]: I0130 22:02:55.178769 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f057715-61ab-4c8f-8839-c80776f31b1e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9f057715-61ab-4c8f-8839-c80776f31b1e" (UID: "9f057715-61ab-4c8f-8839-c80776f31b1e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:55 crc kubenswrapper[4869]: I0130 22:02:55.178921 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f057715-61ab-4c8f-8839-c80776f31b1e-kube-api-access-27dmb" (OuterVolumeSpecName: "kube-api-access-27dmb") pod "9f057715-61ab-4c8f-8839-c80776f31b1e" (UID: "9f057715-61ab-4c8f-8839-c80776f31b1e"). InnerVolumeSpecName "kube-api-access-27dmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:02:55 crc kubenswrapper[4869]: I0130 22:02:55.235090 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f057715-61ab-4c8f-8839-c80776f31b1e-config-data" (OuterVolumeSpecName: "config-data") pod "9f057715-61ab-4c8f-8839-c80776f31b1e" (UID: "9f057715-61ab-4c8f-8839-c80776f31b1e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:55 crc kubenswrapper[4869]: I0130 22:02:55.275359 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27dmb\" (UniqueName: \"kubernetes.io/projected/9f057715-61ab-4c8f-8839-c80776f31b1e-kube-api-access-27dmb\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:55 crc kubenswrapper[4869]: I0130 22:02:55.275393 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f057715-61ab-4c8f-8839-c80776f31b1e-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:55 crc kubenswrapper[4869]: I0130 22:02:55.275403 4869 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9f057715-61ab-4c8f-8839-c80776f31b1e-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:55 crc kubenswrapper[4869]: I0130 22:02:55.275411 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f057715-61ab-4c8f-8839-c80776f31b1e-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:55 crc kubenswrapper[4869]: I0130 22:02:55.940968 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-2" event={"ID":"9f057715-61ab-4c8f-8839-c80776f31b1e","Type":"ContainerDied","Data":"f513d57b12047ef5b17eed7855d0cd79cd6eb3c880a142586b51145607b82baa"} Jan 30 22:02:55 crc kubenswrapper[4869]: I0130 22:02:55.941079 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 30 22:02:55 crc kubenswrapper[4869]: I0130 22:02:55.941297 4869 scope.go:117] "RemoveContainer" containerID="f1f9ad0915e343c780125090f87a698c2765623cf448b74e85bc26b6461a8e3b" Jan 30 22:02:55 crc kubenswrapper[4869]: I0130 22:02:55.966910 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-2"] Jan 30 22:02:55 crc kubenswrapper[4869]: I0130 22:02:55.973207 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-2"] Jan 30 22:02:55 crc kubenswrapper[4869]: I0130 22:02:55.974470 4869 scope.go:117] "RemoveContainer" containerID="deb6b99086a13bcf4bba05843e108c791d972517bc8fb3f7dc143a77968c8575" Jan 30 22:02:55 crc kubenswrapper[4869]: I0130 22:02:55.979857 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-1"] Jan 30 22:02:55 crc kubenswrapper[4869]: I0130 22:02:55.980099 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-scheduler-1" podUID="b3f611a4-7294-4c87-a871-0c720a747866" containerName="cinder-scheduler" containerID="cri-o://93a5ee888a7af3ffd61245d80fdf9f750155c83d02ef63cde6e7caa60262f087" gracePeriod=30 Jan 30 22:02:55 crc kubenswrapper[4869]: I0130 22:02:55.980209 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-scheduler-1" podUID="b3f611a4-7294-4c87-a871-0c720a747866" containerName="probe" containerID="cri-o://3a1eebe3c7a46af2042a498d59b26e4c80ab05192a83a9112b694b5b855e3642" gracePeriod=30 Jan 30 22:02:56 crc kubenswrapper[4869]: I0130 22:02:56.953916 4869 generic.go:334] "Generic (PLEG): container finished" podID="b3f611a4-7294-4c87-a871-0c720a747866" containerID="3a1eebe3c7a46af2042a498d59b26e4c80ab05192a83a9112b694b5b855e3642" exitCode=0 Jan 30 22:02:56 crc kubenswrapper[4869]: I0130 22:02:56.954115 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-1" event={"ID":"b3f611a4-7294-4c87-a871-0c720a747866","Type":"ContainerDied","Data":"3a1eebe3c7a46af2042a498d59b26e4c80ab05192a83a9112b694b5b855e3642"} Jan 30 22:02:57 crc kubenswrapper[4869]: I0130 22:02:57.886980 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f057715-61ab-4c8f-8839-c80776f31b1e" path="/var/lib/kubelet/pods/9f057715-61ab-4c8f-8839-c80776f31b1e/volumes" Jan 30 22:02:59 crc kubenswrapper[4869]: I0130 22:02:59.952564 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 30 22:02:59 crc kubenswrapper[4869]: I0130 22:02:59.985477 4869 generic.go:334] "Generic (PLEG): container finished" podID="b3f611a4-7294-4c87-a871-0c720a747866" containerID="93a5ee888a7af3ffd61245d80fdf9f750155c83d02ef63cde6e7caa60262f087" exitCode=0 Jan 30 22:02:59 crc kubenswrapper[4869]: I0130 22:02:59.985535 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-1" event={"ID":"b3f611a4-7294-4c87-a871-0c720a747866","Type":"ContainerDied","Data":"93a5ee888a7af3ffd61245d80fdf9f750155c83d02ef63cde6e7caa60262f087"} Jan 30 22:02:59 crc kubenswrapper[4869]: I0130 22:02:59.985566 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-1" event={"ID":"b3f611a4-7294-4c87-a871-0c720a747866","Type":"ContainerDied","Data":"ca420615f735c4b2bcb13c02651043430fe302ae695ef8d96f5ab0461b512afe"} Jan 30 22:02:59 crc kubenswrapper[4869]: I0130 22:02:59.985592 4869 scope.go:117] "RemoveContainer" containerID="3a1eebe3c7a46af2042a498d59b26e4c80ab05192a83a9112b694b5b855e3642" Jan 30 22:02:59 crc kubenswrapper[4869]: I0130 22:02:59.985645 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 30 22:03:00 crc kubenswrapper[4869]: I0130 22:03:00.022056 4869 scope.go:117] "RemoveContainer" containerID="93a5ee888a7af3ffd61245d80fdf9f750155c83d02ef63cde6e7caa60262f087" Jan 30 22:03:00 crc kubenswrapper[4869]: I0130 22:03:00.042842 4869 scope.go:117] "RemoveContainer" containerID="3a1eebe3c7a46af2042a498d59b26e4c80ab05192a83a9112b694b5b855e3642" Jan 30 22:03:00 crc kubenswrapper[4869]: E0130 22:03:00.043589 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a1eebe3c7a46af2042a498d59b26e4c80ab05192a83a9112b694b5b855e3642\": container with ID starting with 3a1eebe3c7a46af2042a498d59b26e4c80ab05192a83a9112b694b5b855e3642 not found: ID does not exist" containerID="3a1eebe3c7a46af2042a498d59b26e4c80ab05192a83a9112b694b5b855e3642" Jan 30 22:03:00 crc kubenswrapper[4869]: I0130 22:03:00.043667 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a1eebe3c7a46af2042a498d59b26e4c80ab05192a83a9112b694b5b855e3642"} err="failed to get container status \"3a1eebe3c7a46af2042a498d59b26e4c80ab05192a83a9112b694b5b855e3642\": rpc error: code = NotFound desc = could not find container \"3a1eebe3c7a46af2042a498d59b26e4c80ab05192a83a9112b694b5b855e3642\": container with ID starting with 3a1eebe3c7a46af2042a498d59b26e4c80ab05192a83a9112b694b5b855e3642 not found: ID does not exist" Jan 30 22:03:00 crc kubenswrapper[4869]: I0130 22:03:00.043705 4869 scope.go:117] "RemoveContainer" containerID="93a5ee888a7af3ffd61245d80fdf9f750155c83d02ef63cde6e7caa60262f087" Jan 30 22:03:00 crc kubenswrapper[4869]: E0130 22:03:00.044721 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93a5ee888a7af3ffd61245d80fdf9f750155c83d02ef63cde6e7caa60262f087\": container with ID starting with 93a5ee888a7af3ffd61245d80fdf9f750155c83d02ef63cde6e7caa60262f087 not found: ID does not exist" containerID="93a5ee888a7af3ffd61245d80fdf9f750155c83d02ef63cde6e7caa60262f087" Jan 30 22:03:00 crc kubenswrapper[4869]: I0130 22:03:00.044771 4869 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"93a5ee888a7af3ffd61245d80fdf9f750155c83d02ef63cde6e7caa60262f087"} err="failed to get container status \"93a5ee888a7af3ffd61245d80fdf9f750155c83d02ef63cde6e7caa60262f087\": rpc error: code = NotFound desc = could not find container \"93a5ee888a7af3ffd61245d80fdf9f750155c83d02ef63cde6e7caa60262f087\": container with ID starting with 93a5ee888a7af3ffd61245d80fdf9f750155c83d02ef63cde6e7caa60262f087 not found: ID does not exist" Jan 30 22:03:00 crc kubenswrapper[4869]: I0130 22:03:00.058170 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b3f611a4-7294-4c87-a871-0c720a747866-config-data-custom\") pod \"b3f611a4-7294-4c87-a871-0c720a747866\" (UID: \"b3f611a4-7294-4c87-a871-0c720a747866\") " Jan 30 22:03:00 crc kubenswrapper[4869]: I0130 22:03:00.058378 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b3f611a4-7294-4c87-a871-0c720a747866-etc-machine-id\") pod \"b3f611a4-7294-4c87-a871-0c720a747866\" (UID: \"b3f611a4-7294-4c87-a871-0c720a747866\") " Jan 30 22:03:00 crc kubenswrapper[4869]: I0130 22:03:00.058414 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3f611a4-7294-4c87-a871-0c720a747866-scripts\") pod \"b3f611a4-7294-4c87-a871-0c720a747866\" (UID: \"b3f611a4-7294-4c87-a871-0c720a747866\") " Jan 30 22:03:00 crc kubenswrapper[4869]: I0130 22:03:00.058510 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3f611a4-7294-4c87-a871-0c720a747866-config-data\") pod \"b3f611a4-7294-4c87-a871-0c720a747866\" (UID: \"b3f611a4-7294-4c87-a871-0c720a747866\") " Jan 30 22:03:00 crc kubenswrapper[4869]: I0130 22:03:00.058545 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkj5l\" (UniqueName: \"kubernetes.io/projected/b3f611a4-7294-4c87-a871-0c720a747866-kube-api-access-zkj5l\") pod \"b3f611a4-7294-4c87-a871-0c720a747866\" (UID: \"b3f611a4-7294-4c87-a871-0c720a747866\") " Jan 30 22:03:00 crc kubenswrapper[4869]: I0130 22:03:00.059763 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3f611a4-7294-4c87-a871-0c720a747866-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b3f611a4-7294-4c87-a871-0c720a747866" (UID: "b3f611a4-7294-4c87-a871-0c720a747866"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:03:00 crc kubenswrapper[4869]: I0130 22:03:00.065062 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3f611a4-7294-4c87-a871-0c720a747866-scripts" (OuterVolumeSpecName: "scripts") pod "b3f611a4-7294-4c87-a871-0c720a747866" (UID: "b3f611a4-7294-4c87-a871-0c720a747866"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:00 crc kubenswrapper[4869]: I0130 22:03:00.065441 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3f611a4-7294-4c87-a871-0c720a747866-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b3f611a4-7294-4c87-a871-0c720a747866" (UID: "b3f611a4-7294-4c87-a871-0c720a747866"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:00 crc kubenswrapper[4869]: I0130 22:03:00.065590 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3f611a4-7294-4c87-a871-0c720a747866-kube-api-access-zkj5l" (OuterVolumeSpecName: "kube-api-access-zkj5l") pod "b3f611a4-7294-4c87-a871-0c720a747866" (UID: "b3f611a4-7294-4c87-a871-0c720a747866"). InnerVolumeSpecName "kube-api-access-zkj5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:03:00 crc kubenswrapper[4869]: I0130 22:03:00.129132 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3f611a4-7294-4c87-a871-0c720a747866-config-data" (OuterVolumeSpecName: "config-data") pod "b3f611a4-7294-4c87-a871-0c720a747866" (UID: "b3f611a4-7294-4c87-a871-0c720a747866"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:00 crc kubenswrapper[4869]: I0130 22:03:00.159971 4869 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b3f611a4-7294-4c87-a871-0c720a747866-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:00 crc kubenswrapper[4869]: I0130 22:03:00.160004 4869 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b3f611a4-7294-4c87-a871-0c720a747866-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:00 crc kubenswrapper[4869]: I0130 22:03:00.160013 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3f611a4-7294-4c87-a871-0c720a747866-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:00 crc kubenswrapper[4869]: I0130 22:03:00.160021 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3f611a4-7294-4c87-a871-0c720a747866-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:00 crc kubenswrapper[4869]: I0130 22:03:00.160030 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkj5l\" (UniqueName: \"kubernetes.io/projected/b3f611a4-7294-4c87-a871-0c720a747866-kube-api-access-zkj5l\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:00 crc kubenswrapper[4869]: I0130 22:03:00.325995 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-1"] Jan 30 22:03:00 crc kubenswrapper[4869]: I0130 22:03:00.331699 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-1"] Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.242051 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-backup-1"] Jan 30 22:03:01 crc kubenswrapper[4869]: E0130 22:03:01.242693 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3f611a4-7294-4c87-a871-0c720a747866" containerName="cinder-scheduler" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.242711 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3f611a4-7294-4c87-a871-0c720a747866" containerName="cinder-scheduler" Jan 30 22:03:01 crc kubenswrapper[4869]: E0130 22:03:01.242734 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f057715-61ab-4c8f-8839-c80776f31b1e" containerName="probe" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.242742 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f057715-61ab-4c8f-8839-c80776f31b1e" containerName="probe" Jan 30 22:03:01 crc 
kubenswrapper[4869]: E0130 22:03:01.242757 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3f611a4-7294-4c87-a871-0c720a747866" containerName="probe" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.242768 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3f611a4-7294-4c87-a871-0c720a747866" containerName="probe" Jan 30 22:03:01 crc kubenswrapper[4869]: E0130 22:03:01.242778 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f057715-61ab-4c8f-8839-c80776f31b1e" containerName="cinder-scheduler" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.242787 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f057715-61ab-4c8f-8839-c80776f31b1e" containerName="cinder-scheduler" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.242965 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f057715-61ab-4c8f-8839-c80776f31b1e" containerName="cinder-scheduler" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.242981 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3f611a4-7294-4c87-a871-0c720a747866" containerName="probe" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.242999 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f057715-61ab-4c8f-8839-c80776f31b1e" containerName="probe" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.243012 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3f611a4-7294-4c87-a871-0c720a747866" containerName="cinder-scheduler" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.243981 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-1" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.251732 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-backup-1"] Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.378796 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-lib-modules\") pod \"cinder-backup-1\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.378850 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-sys\") pod \"cinder-backup-1\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.378876 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-dev\") pod \"cinder-backup-1\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.378920 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-etc-iscsi\") pod \"cinder-backup-1\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.378946 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ad6667e-bb64-479d-9ae9-17a4c167f3da-config-data\") pod \"cinder-backup-1\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.378994 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-etc-machine-id\") pod \"cinder-backup-1\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.379008 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-var-locks-brick\") pod \"cinder-backup-1\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.379026 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-var-lib-cinder\") pod \"cinder-backup-1\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.379092 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6ad6667e-bb64-479d-9ae9-17a4c167f3da-config-data-custom\") pod \"cinder-backup-1\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.379120 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-var-locks-cinder\") pod \"cinder-backup-1\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.379140 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5x4x\" (UniqueName: \"kubernetes.io/projected/6ad6667e-bb64-479d-9ae9-17a4c167f3da-kube-api-access-h5x4x\") pod \"cinder-backup-1\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.379160 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ad6667e-bb64-479d-9ae9-17a4c167f3da-scripts\") pod \"cinder-backup-1\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.379203 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-etc-nvme\") pod \"cinder-backup-1\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.379223 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: 
\"kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-run\") pod \"cinder-backup-1\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.479756 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-run\") pod \"cinder-backup-1\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.479821 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-run\") pod \"cinder-backup-1\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.479832 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-lib-modules\") pod \"cinder-backup-1\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.479857 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-sys\") pod \"cinder-backup-1\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.479877 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-dev\") pod \"cinder-backup-1\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.479886 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-lib-modules\") pod \"cinder-backup-1\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.479962 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-dev\") pod \"cinder-backup-1\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.479967 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-etc-iscsi\") pod \"cinder-backup-1\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.479946 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-etc-iscsi\") pod \"cinder-backup-1\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.480000 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6ad6667e-bb64-479d-9ae9-17a4c167f3da-config-data\") pod \"cinder-backup-1\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.479943 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-sys\") pod \"cinder-backup-1\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.480042 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-etc-machine-id\") pod \"cinder-backup-1\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.480023 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-etc-machine-id\") pod \"cinder-backup-1\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.480154 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-var-lib-cinder\") pod \"cinder-backup-1\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.480179 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-var-locks-brick\") pod \"cinder-backup-1\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.480213 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6ad6667e-bb64-479d-9ae9-17a4c167f3da-config-data-custom\") pod \"cinder-backup-1\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.480246 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-var-locks-cinder\") pod \"cinder-backup-1\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.480269 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ad6667e-bb64-479d-9ae9-17a4c167f3da-scripts\") pod \"cinder-backup-1\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.480282 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-var-locks-brick\") pod \"cinder-backup-1\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.480292 
4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5x4x\" (UniqueName: \"kubernetes.io/projected/6ad6667e-bb64-479d-9ae9-17a4c167f3da-kube-api-access-h5x4x\") pod \"cinder-backup-1\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.480317 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-var-locks-cinder\") pod \"cinder-backup-1\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.480337 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-etc-nvme\") pod \"cinder-backup-1\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.480430 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-etc-nvme\") pod \"cinder-backup-1\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.480584 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-var-lib-cinder\") pod \"cinder-backup-1\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.484658 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6ad6667e-bb64-479d-9ae9-17a4c167f3da-config-data-custom\") pod \"cinder-backup-1\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.484773 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ad6667e-bb64-479d-9ae9-17a4c167f3da-config-data\") pod \"cinder-backup-1\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.485406 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ad6667e-bb64-479d-9ae9-17a4c167f3da-scripts\") pod \"cinder-backup-1\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.496971 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5x4x\" (UniqueName: \"kubernetes.io/projected/6ad6667e-bb64-479d-9ae9-17a4c167f3da-kube-api-access-h5x4x\") pod \"cinder-backup-1\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.567291 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-1" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.889803 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3f611a4-7294-4c87-a871-0c720a747866" path="/var/lib/kubelet/pods/b3f611a4-7294-4c87-a871-0c720a747866/volumes" Jan 30 22:03:01 crc kubenswrapper[4869]: I0130 22:03:01.964383 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-backup-1"] Jan 30 22:03:01 crc kubenswrapper[4869]: W0130 22:03:01.968346 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ad6667e_bb64_479d_9ae9_17a4c167f3da.slice/crio-5d62dfd444e65ae8a3dfc91c05880a6ee16e386fdf43bbcb24dd7a160ed99ada WatchSource:0}: Error finding container 5d62dfd444e65ae8a3dfc91c05880a6ee16e386fdf43bbcb24dd7a160ed99ada: Status 404 returned error can't find the container with id 5d62dfd444e65ae8a3dfc91c05880a6ee16e386fdf43bbcb24dd7a160ed99ada Jan 30 22:03:02 crc kubenswrapper[4869]: I0130 22:03:02.007582 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-1" event={"ID":"6ad6667e-bb64-479d-9ae9-17a4c167f3da","Type":"ContainerStarted","Data":"5d62dfd444e65ae8a3dfc91c05880a6ee16e386fdf43bbcb24dd7a160ed99ada"} Jan 30 22:03:03 crc kubenswrapper[4869]: I0130 22:03:03.035126 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-1" event={"ID":"6ad6667e-bb64-479d-9ae9-17a4c167f3da","Type":"ContainerStarted","Data":"889a4e40ca0bf7d7fd2978369c2680e1dd8aade60101a42bc974ed58e3af9020"} Jan 30 22:03:03 crc kubenswrapper[4869]: I0130 22:03:03.035672 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-1" event={"ID":"6ad6667e-bb64-479d-9ae9-17a4c167f3da","Type":"ContainerStarted","Data":"0ac13b130f05fb8d166e6d2cf29202267788a2ec4c62ce59dde8d62e7d4ae2be"} Jan 30 22:03:03 crc kubenswrapper[4869]: I0130 22:03:03.054543 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-backup-1" podStartSLOduration=2.05452718 podStartE2EDuration="2.05452718s" podCreationTimestamp="2026-01-30 22:03:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:03:03.052373892 +0000 UTC m=+1183.938131917" watchObservedRunningTime="2026-01-30 22:03:03.05452718 +0000 UTC m=+1183.940285205" Jan 30 22:03:06 crc kubenswrapper[4869]: I0130 22:03:06.567817 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-backup-1" Jan 30 22:03:07 crc kubenswrapper[4869]: I0130 22:03:07.877517 4869 scope.go:117] "RemoveContainer" containerID="cd9edc9d7bc98458cffddcf41f1c576b57630b44231cecd36b86ff8d42c5b329" Jan 30 22:03:07 crc kubenswrapper[4869]: I0130 22:03:07.877801 4869 scope.go:117] "RemoveContainer" containerID="4ca6ef543ed5bd6ebb8ca9f7da6c793efc6e543796d04ba49445fdd02017e635" Jan 30 22:03:08 crc kubenswrapper[4869]: I0130 22:03:08.073040 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"a748ea99-7369-48cb-8983-9f41ff077f82","Type":"ContainerStarted","Data":"bae8ee2d2ea3ae8f98b57e2c4700d1cbe1214043f47147cae81508bdb3e4db03"} Jan 30 22:03:09 crc kubenswrapper[4869]: I0130 22:03:09.082608 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" 
event={"ID":"a748ea99-7369-48cb-8983-9f41ff077f82","Type":"ContainerStarted","Data":"f77f890fcdaf60c1c7106e84a9a3bffb31f68618c74dd384835bde05492c7422"} Jan 30 22:03:10 crc kubenswrapper[4869]: I0130 22:03:10.092388 4869 generic.go:334] "Generic (PLEG): container finished" podID="a748ea99-7369-48cb-8983-9f41ff077f82" containerID="f77f890fcdaf60c1c7106e84a9a3bffb31f68618c74dd384835bde05492c7422" exitCode=1 Jan 30 22:03:10 crc kubenswrapper[4869]: I0130 22:03:10.092421 4869 generic.go:334] "Generic (PLEG): container finished" podID="a748ea99-7369-48cb-8983-9f41ff077f82" containerID="bae8ee2d2ea3ae8f98b57e2c4700d1cbe1214043f47147cae81508bdb3e4db03" exitCode=1 Jan 30 22:03:10 crc kubenswrapper[4869]: I0130 22:03:10.092425 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"a748ea99-7369-48cb-8983-9f41ff077f82","Type":"ContainerDied","Data":"f77f890fcdaf60c1c7106e84a9a3bffb31f68618c74dd384835bde05492c7422"} Jan 30 22:03:10 crc kubenswrapper[4869]: I0130 22:03:10.092485 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"a748ea99-7369-48cb-8983-9f41ff077f82","Type":"ContainerDied","Data":"bae8ee2d2ea3ae8f98b57e2c4700d1cbe1214043f47147cae81508bdb3e4db03"} Jan 30 22:03:10 crc kubenswrapper[4869]: I0130 22:03:10.092505 4869 scope.go:117] "RemoveContainer" containerID="4ca6ef543ed5bd6ebb8ca9f7da6c793efc6e543796d04ba49445fdd02017e635" Jan 30 22:03:10 crc kubenswrapper[4869]: I0130 22:03:10.092958 4869 scope.go:117] "RemoveContainer" containerID="bae8ee2d2ea3ae8f98b57e2c4700d1cbe1214043f47147cae81508bdb3e4db03" Jan 30 22:03:10 crc kubenswrapper[4869]: I0130 22:03:10.092989 4869 scope.go:117] "RemoveContainer" containerID="f77f890fcdaf60c1c7106e84a9a3bffb31f68618c74dd384835bde05492c7422" Jan 30 22:03:10 crc kubenswrapper[4869]: E0130 22:03:10.093203 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(a748ea99-7369-48cb-8983-9f41ff077f82)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 40s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(a748ea99-7369-48cb-8983-9f41ff077f82)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="a748ea99-7369-48cb-8983-9f41ff077f82" Jan 30 22:03:10 crc kubenswrapper[4869]: I0130 22:03:10.155324 4869 scope.go:117] "RemoveContainer" containerID="cd9edc9d7bc98458cffddcf41f1c576b57630b44231cecd36b86ff8d42c5b329" Jan 30 22:03:11 crc kubenswrapper[4869]: I0130 22:03:11.103597 4869 scope.go:117] "RemoveContainer" containerID="bae8ee2d2ea3ae8f98b57e2c4700d1cbe1214043f47147cae81508bdb3e4db03" Jan 30 22:03:11 crc kubenswrapper[4869]: I0130 22:03:11.103863 4869 scope.go:117] "RemoveContainer" containerID="f77f890fcdaf60c1c7106e84a9a3bffb31f68618c74dd384835bde05492c7422" Jan 30 22:03:11 crc kubenswrapper[4869]: E0130 22:03:11.104121 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(a748ea99-7369-48cb-8983-9f41ff077f82)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 40s restarting failed container=probe 
pod=cinder-volume-volume1-0_cinder-kuttl-tests(a748ea99-7369-48cb-8983-9f41ff077f82)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="a748ea99-7369-48cb-8983-9f41ff077f82" Jan 30 22:03:11 crc kubenswrapper[4869]: I0130 22:03:11.826567 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="cinder-kuttl-tests/cinder-backup-1" Jan 30 22:03:11 crc kubenswrapper[4869]: I0130 22:03:11.893890 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-backup-2"] Jan 30 22:03:11 crc kubenswrapper[4869]: I0130 22:03:11.895202 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-2" Jan 30 22:03:11 crc kubenswrapper[4869]: I0130 22:03:11.902442 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-backup-2"] Jan 30 22:03:12 crc kubenswrapper[4869]: I0130 22:03:12.053682 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-var-locks-cinder\") pod \"cinder-backup-2\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 30 22:03:12 crc kubenswrapper[4869]: I0130 22:03:12.053761 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-dev\") pod \"cinder-backup-2\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 30 22:03:12 crc kubenswrapper[4869]: I0130 22:03:12.054351 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-lib-modules\") pod \"cinder-backup-2\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 30 22:03:12 crc kubenswrapper[4869]: I0130 22:03:12.054426 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-etc-nvme\") pod \"cinder-backup-2\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 30 22:03:12 crc kubenswrapper[4869]: I0130 22:03:12.054451 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bca9024a-e6a5-4824-8afa-6754d2151143-config-data-custom\") pod \"cinder-backup-2\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 30 22:03:12 crc kubenswrapper[4869]: I0130 22:03:12.054475 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-etc-iscsi\") pod \"cinder-backup-2\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 30 22:03:12 crc kubenswrapper[4869]: I0130 22:03:12.054498 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-var-locks-brick\") pod \"cinder-backup-2\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " 
pod="cinder-kuttl-tests/cinder-backup-2" Jan 30 22:03:12 crc kubenswrapper[4869]: I0130 22:03:12.054542 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bca9024a-e6a5-4824-8afa-6754d2151143-scripts\") pod \"cinder-backup-2\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 30 22:03:12 crc kubenswrapper[4869]: I0130 22:03:12.054711 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-run\") pod \"cinder-backup-2\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 30 22:03:12 crc kubenswrapper[4869]: I0130 22:03:12.054831 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-sys\") pod \"cinder-backup-2\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 30 22:03:12 crc kubenswrapper[4869]: I0130 22:03:12.054912 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bca9024a-e6a5-4824-8afa-6754d2151143-config-data\") pod \"cinder-backup-2\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 30 22:03:12 crc kubenswrapper[4869]: I0130 22:03:12.055016 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-etc-machine-id\") pod \"cinder-backup-2\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 30 22:03:12 crc kubenswrapper[4869]: I0130 22:03:12.055063 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqr5j\" (UniqueName: \"kubernetes.io/projected/bca9024a-e6a5-4824-8afa-6754d2151143-kube-api-access-cqr5j\") pod \"cinder-backup-2\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 30 22:03:12 crc kubenswrapper[4869]: I0130 22:03:12.055112 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-var-lib-cinder\") pod \"cinder-backup-2\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 30 22:03:12 crc kubenswrapper[4869]: I0130 22:03:12.157238 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-var-locks-cinder\") pod \"cinder-backup-2\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 30 22:03:12 crc kubenswrapper[4869]: I0130 22:03:12.157302 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-dev\") pod \"cinder-backup-2\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 30 22:03:12 crc kubenswrapper[4869]: I0130 22:03:12.157322 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-lib-modules\") pod \"cinder-backup-2\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 30 22:03:12 crc kubenswrapper[4869]: I0130 22:03:12.157357 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-etc-nvme\") pod \"cinder-backup-2\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 30 22:03:12 crc kubenswrapper[4869]: I0130 22:03:12.157380 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bca9024a-e6a5-4824-8afa-6754d2151143-config-data-custom\") pod \"cinder-backup-2\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 30 22:03:12 crc kubenswrapper[4869]: I0130 22:03:12.157410 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-var-locks-brick\") pod \"cinder-backup-2\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 30 22:03:12 crc kubenswrapper[4869]: I0130 22:03:12.157414 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-dev\") pod \"cinder-backup-2\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 30 22:03:12 crc kubenswrapper[4869]: I0130 22:03:12.157434 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-etc-iscsi\") pod \"cinder-backup-2\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 30 22:03:12 crc kubenswrapper[4869]: I0130 22:03:12.157437 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-lib-modules\") pod \"cinder-backup-2\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 30 22:03:12 crc kubenswrapper[4869]: I0130 22:03:12.157479 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bca9024a-e6a5-4824-8afa-6754d2151143-scripts\") pod \"cinder-backup-2\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 30 22:03:12 crc kubenswrapper[4869]: I0130 22:03:12.157501 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-var-locks-brick\") pod \"cinder-backup-2\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 30 22:03:12 crc kubenswrapper[4869]: I0130 22:03:12.157515 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-run\") pod \"cinder-backup-2\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 30 22:03:12 crc 
kubenswrapper[4869]: I0130 22:03:12.157450 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-var-locks-cinder\") pod \"cinder-backup-2\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 30 22:03:12 crc kubenswrapper[4869]: I0130 22:03:12.157520 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-etc-nvme\") pod \"cinder-backup-2\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 30 22:03:12 crc kubenswrapper[4869]: I0130 22:03:12.157540 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-sys\") pod \"cinder-backup-2\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 30 22:03:12 crc kubenswrapper[4869]: I0130 22:03:12.157550 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-etc-iscsi\") pod \"cinder-backup-2\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 30 22:03:12 crc kubenswrapper[4869]: I0130 22:03:12.157583 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-sys\") pod \"cinder-backup-2\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 30 22:03:12 crc kubenswrapper[4869]: I0130 22:03:12.157567 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bca9024a-e6a5-4824-8afa-6754d2151143-config-data\") pod \"cinder-backup-2\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 30 22:03:12 crc kubenswrapper[4869]: I0130 22:03:12.157566 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-run\") pod \"cinder-backup-2\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 30 22:03:12 crc kubenswrapper[4869]: I0130 22:03:12.157690 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-etc-machine-id\") pod \"cinder-backup-2\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 30 22:03:12 crc kubenswrapper[4869]: I0130 22:03:12.157734 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqr5j\" (UniqueName: \"kubernetes.io/projected/bca9024a-e6a5-4824-8afa-6754d2151143-kube-api-access-cqr5j\") pod \"cinder-backup-2\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 30 22:03:12 crc kubenswrapper[4869]: I0130 22:03:12.157765 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-var-lib-cinder\") pod \"cinder-backup-2\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " 
pod="cinder-kuttl-tests/cinder-backup-2" Jan 30 22:03:12 crc kubenswrapper[4869]: I0130 22:03:12.157926 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-var-lib-cinder\") pod \"cinder-backup-2\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 30 22:03:12 crc kubenswrapper[4869]: I0130 22:03:12.157952 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-etc-machine-id\") pod \"cinder-backup-2\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 30 22:03:12 crc kubenswrapper[4869]: I0130 22:03:12.163973 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bca9024a-e6a5-4824-8afa-6754d2151143-scripts\") pod \"cinder-backup-2\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 30 22:03:12 crc kubenswrapper[4869]: I0130 22:03:12.164256 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bca9024a-e6a5-4824-8afa-6754d2151143-config-data-custom\") pod \"cinder-backup-2\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 30 22:03:12 crc kubenswrapper[4869]: I0130 22:03:12.164337 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bca9024a-e6a5-4824-8afa-6754d2151143-config-data\") pod \"cinder-backup-2\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 30 22:03:12 crc kubenswrapper[4869]: I0130 22:03:12.178049 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqr5j\" (UniqueName: \"kubernetes.io/projected/bca9024a-e6a5-4824-8afa-6754d2151143-kube-api-access-cqr5j\") pod \"cinder-backup-2\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 30 22:03:12 crc kubenswrapper[4869]: I0130 22:03:12.194290 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:03:12 crc kubenswrapper[4869]: I0130 22:03:12.194359 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:03:12 crc kubenswrapper[4869]: I0130 22:03:12.195070 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:03:12 crc kubenswrapper[4869]: I0130 22:03:12.195156 4869 scope.go:117] "RemoveContainer" containerID="bae8ee2d2ea3ae8f98b57e2c4700d1cbe1214043f47147cae81508bdb3e4db03" Jan 30 22:03:12 crc kubenswrapper[4869]: I0130 22:03:12.195180 4869 scope.go:117] "RemoveContainer" containerID="f77f890fcdaf60c1c7106e84a9a3bffb31f68618c74dd384835bde05492c7422" Jan 30 22:03:12 crc kubenswrapper[4869]: E0130 22:03:12.195534 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(a748ea99-7369-48cb-8983-9f41ff077f82)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: 
\"back-off 40s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(a748ea99-7369-48cb-8983-9f41ff077f82)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="a748ea99-7369-48cb-8983-9f41ff077f82" Jan 30 22:03:12 crc kubenswrapper[4869]: I0130 22:03:12.222266 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-2" Jan 30 22:03:12 crc kubenswrapper[4869]: I0130 22:03:12.412585 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-backup-2"] Jan 30 22:03:12 crc kubenswrapper[4869]: W0130 22:03:12.416408 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbca9024a_e6a5_4824_8afa_6754d2151143.slice/crio-b635bcb303b8853e7219509be57af2d70803368a78f3d137edaf7599e15ab1e8 WatchSource:0}: Error finding container b635bcb303b8853e7219509be57af2d70803368a78f3d137edaf7599e15ab1e8: Status 404 returned error can't find the container with id b635bcb303b8853e7219509be57af2d70803368a78f3d137edaf7599e15ab1e8 Jan 30 22:03:13 crc kubenswrapper[4869]: I0130 22:03:13.121154 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-2" event={"ID":"bca9024a-e6a5-4824-8afa-6754d2151143","Type":"ContainerStarted","Data":"55d68bf5f27721dd02131489eb5129ce472870070dad49c238cfa7d5525de31c"} Jan 30 22:03:13 crc kubenswrapper[4869]: I0130 22:03:13.121677 4869 scope.go:117] "RemoveContainer" containerID="bae8ee2d2ea3ae8f98b57e2c4700d1cbe1214043f47147cae81508bdb3e4db03" Jan 30 22:03:13 crc kubenswrapper[4869]: I0130 22:03:13.121690 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-2" event={"ID":"bca9024a-e6a5-4824-8afa-6754d2151143","Type":"ContainerStarted","Data":"7fc018d87eacac2281244821cbed3a2d667d7be2b2db85ccb8193fc672e00b39"} Jan 30 22:03:13 crc kubenswrapper[4869]: I0130 22:03:13.121704 4869 scope.go:117] "RemoveContainer" containerID="f77f890fcdaf60c1c7106e84a9a3bffb31f68618c74dd384835bde05492c7422" Jan 30 22:03:13 crc kubenswrapper[4869]: I0130 22:03:13.121708 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-2" event={"ID":"bca9024a-e6a5-4824-8afa-6754d2151143","Type":"ContainerStarted","Data":"b635bcb303b8853e7219509be57af2d70803368a78f3d137edaf7599e15ab1e8"} Jan 30 22:03:13 crc kubenswrapper[4869]: E0130 22:03:13.121959 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(a748ea99-7369-48cb-8983-9f41ff077f82)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 40s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(a748ea99-7369-48cb-8983-9f41ff077f82)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="a748ea99-7369-48cb-8983-9f41ff077f82" Jan 30 22:03:13 crc kubenswrapper[4869]: I0130 22:03:13.140122 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-backup-2" podStartSLOduration=2.140103634 podStartE2EDuration="2.140103634s" podCreationTimestamp="2026-01-30 22:03:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:03:13.139491545 +0000 UTC m=+1194.025249580" 
watchObservedRunningTime="2026-01-30 22:03:13.140103634 +0000 UTC m=+1194.025861649" Jan 30 22:03:17 crc kubenswrapper[4869]: I0130 22:03:17.223364 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-backup-2" Jan 30 22:03:17 crc kubenswrapper[4869]: I0130 22:03:17.489397 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="cinder-kuttl-tests/cinder-backup-2" Jan 30 22:03:18 crc kubenswrapper[4869]: I0130 22:03:18.599930 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-backup-2"] Jan 30 22:03:19 crc kubenswrapper[4869]: I0130 22:03:19.157037 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-backup-2" podUID="bca9024a-e6a5-4824-8afa-6754d2151143" containerName="cinder-backup" containerID="cri-o://7fc018d87eacac2281244821cbed3a2d667d7be2b2db85ccb8193fc672e00b39" gracePeriod=30 Jan 30 22:03:19 crc kubenswrapper[4869]: I0130 22:03:19.157104 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-backup-2" podUID="bca9024a-e6a5-4824-8afa-6754d2151143" containerName="probe" containerID="cri-o://55d68bf5f27721dd02131489eb5129ce472870070dad49c238cfa7d5525de31c" gracePeriod=30 Jan 30 22:03:20 crc kubenswrapper[4869]: I0130 22:03:20.171302 4869 generic.go:334] "Generic (PLEG): container finished" podID="bca9024a-e6a5-4824-8afa-6754d2151143" containerID="55d68bf5f27721dd02131489eb5129ce472870070dad49c238cfa7d5525de31c" exitCode=0 Jan 30 22:03:20 crc kubenswrapper[4869]: I0130 22:03:20.171545 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-2" event={"ID":"bca9024a-e6a5-4824-8afa-6754d2151143","Type":"ContainerDied","Data":"55d68bf5f27721dd02131489eb5129ce472870070dad49c238cfa7d5525de31c"} Jan 30 22:03:23 crc kubenswrapper[4869]: I0130 22:03:23.929237 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-2" Jan 30 22:03:23 crc kubenswrapper[4869]: I0130 22:03:23.937469 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-run\") pod \"bca9024a-e6a5-4824-8afa-6754d2151143\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " Jan 30 22:03:23 crc kubenswrapper[4869]: I0130 22:03:23.937521 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqr5j\" (UniqueName: \"kubernetes.io/projected/bca9024a-e6a5-4824-8afa-6754d2151143-kube-api-access-cqr5j\") pod \"bca9024a-e6a5-4824-8afa-6754d2151143\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " Jan 30 22:03:23 crc kubenswrapper[4869]: I0130 22:03:23.937549 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-sys\") pod \"bca9024a-e6a5-4824-8afa-6754d2151143\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " Jan 30 22:03:23 crc kubenswrapper[4869]: I0130 22:03:23.937575 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-etc-iscsi\") pod \"bca9024a-e6a5-4824-8afa-6754d2151143\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " Jan 30 22:03:23 crc kubenswrapper[4869]: I0130 22:03:23.937610 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-etc-nvme\") pod \"bca9024a-e6a5-4824-8afa-6754d2151143\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " Jan 30 22:03:23 crc kubenswrapper[4869]: I0130 22:03:23.937629 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-var-locks-brick\") pod \"bca9024a-e6a5-4824-8afa-6754d2151143\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " Jan 30 22:03:23 crc kubenswrapper[4869]: I0130 22:03:23.937655 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-var-locks-cinder\") pod \"bca9024a-e6a5-4824-8afa-6754d2151143\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " Jan 30 22:03:23 crc kubenswrapper[4869]: I0130 22:03:23.937685 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bca9024a-e6a5-4824-8afa-6754d2151143-scripts\") pod \"bca9024a-e6a5-4824-8afa-6754d2151143\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " Jan 30 22:03:23 crc kubenswrapper[4869]: I0130 22:03:23.937718 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-dev\") pod \"bca9024a-e6a5-4824-8afa-6754d2151143\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " Jan 30 22:03:23 crc kubenswrapper[4869]: I0130 22:03:23.937750 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bca9024a-e6a5-4824-8afa-6754d2151143-config-data-custom\") pod \"bca9024a-e6a5-4824-8afa-6754d2151143\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " Jan 30 22:03:23 crc 
kubenswrapper[4869]: I0130 22:03:23.937780 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-var-lib-cinder\") pod \"bca9024a-e6a5-4824-8afa-6754d2151143\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " Jan 30 22:03:23 crc kubenswrapper[4869]: I0130 22:03:23.937811 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-lib-modules\") pod \"bca9024a-e6a5-4824-8afa-6754d2151143\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " Jan 30 22:03:23 crc kubenswrapper[4869]: I0130 22:03:23.937835 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-etc-machine-id\") pod \"bca9024a-e6a5-4824-8afa-6754d2151143\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " Jan 30 22:03:23 crc kubenswrapper[4869]: I0130 22:03:23.937872 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bca9024a-e6a5-4824-8afa-6754d2151143-config-data\") pod \"bca9024a-e6a5-4824-8afa-6754d2151143\" (UID: \"bca9024a-e6a5-4824-8afa-6754d2151143\") " Jan 30 22:03:23 crc kubenswrapper[4869]: I0130 22:03:23.938072 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-run" (OuterVolumeSpecName: "run") pod "bca9024a-e6a5-4824-8afa-6754d2151143" (UID: "bca9024a-e6a5-4824-8afa-6754d2151143"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:03:23 crc kubenswrapper[4869]: I0130 22:03:23.938292 4869 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-run\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:23 crc kubenswrapper[4869]: I0130 22:03:23.938319 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-dev" (OuterVolumeSpecName: "dev") pod "bca9024a-e6a5-4824-8afa-6754d2151143" (UID: "bca9024a-e6a5-4824-8afa-6754d2151143"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:03:23 crc kubenswrapper[4869]: I0130 22:03:23.938341 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-sys" (OuterVolumeSpecName: "sys") pod "bca9024a-e6a5-4824-8afa-6754d2151143" (UID: "bca9024a-e6a5-4824-8afa-6754d2151143"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:03:23 crc kubenswrapper[4869]: I0130 22:03:23.938358 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "bca9024a-e6a5-4824-8afa-6754d2151143" (UID: "bca9024a-e6a5-4824-8afa-6754d2151143"). InnerVolumeSpecName "etc-iscsi". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:03:23 crc kubenswrapper[4869]: I0130 22:03:23.938380 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "bca9024a-e6a5-4824-8afa-6754d2151143" (UID: "bca9024a-e6a5-4824-8afa-6754d2151143"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:03:23 crc kubenswrapper[4869]: I0130 22:03:23.938398 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "bca9024a-e6a5-4824-8afa-6754d2151143" (UID: "bca9024a-e6a5-4824-8afa-6754d2151143"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:03:23 crc kubenswrapper[4869]: I0130 22:03:23.938414 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "bca9024a-e6a5-4824-8afa-6754d2151143" (UID: "bca9024a-e6a5-4824-8afa-6754d2151143"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:03:23 crc kubenswrapper[4869]: I0130 22:03:23.939038 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "bca9024a-e6a5-4824-8afa-6754d2151143" (UID: "bca9024a-e6a5-4824-8afa-6754d2151143"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:03:23 crc kubenswrapper[4869]: I0130 22:03:23.939080 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "bca9024a-e6a5-4824-8afa-6754d2151143" (UID: "bca9024a-e6a5-4824-8afa-6754d2151143"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:03:23 crc kubenswrapper[4869]: I0130 22:03:23.939105 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bca9024a-e6a5-4824-8afa-6754d2151143" (UID: "bca9024a-e6a5-4824-8afa-6754d2151143"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:03:23 crc kubenswrapper[4869]: I0130 22:03:23.943607 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bca9024a-e6a5-4824-8afa-6754d2151143-scripts" (OuterVolumeSpecName: "scripts") pod "bca9024a-e6a5-4824-8afa-6754d2151143" (UID: "bca9024a-e6a5-4824-8afa-6754d2151143"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:23 crc kubenswrapper[4869]: I0130 22:03:23.944098 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bca9024a-e6a5-4824-8afa-6754d2151143-kube-api-access-cqr5j" (OuterVolumeSpecName: "kube-api-access-cqr5j") pod "bca9024a-e6a5-4824-8afa-6754d2151143" (UID: "bca9024a-e6a5-4824-8afa-6754d2151143"). InnerVolumeSpecName "kube-api-access-cqr5j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:03:23 crc kubenswrapper[4869]: I0130 22:03:23.953516 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bca9024a-e6a5-4824-8afa-6754d2151143-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "bca9024a-e6a5-4824-8afa-6754d2151143" (UID: "bca9024a-e6a5-4824-8afa-6754d2151143"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:24 crc kubenswrapper[4869]: I0130 22:03:24.016496 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bca9024a-e6a5-4824-8afa-6754d2151143-config-data" (OuterVolumeSpecName: "config-data") pod "bca9024a-e6a5-4824-8afa-6754d2151143" (UID: "bca9024a-e6a5-4824-8afa-6754d2151143"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:24 crc kubenswrapper[4869]: I0130 22:03:24.039575 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bca9024a-e6a5-4824-8afa-6754d2151143-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:24 crc kubenswrapper[4869]: I0130 22:03:24.039611 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqr5j\" (UniqueName: \"kubernetes.io/projected/bca9024a-e6a5-4824-8afa-6754d2151143-kube-api-access-cqr5j\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:24 crc kubenswrapper[4869]: I0130 22:03:24.039621 4869 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-sys\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:24 crc kubenswrapper[4869]: I0130 22:03:24.039630 4869 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:24 crc kubenswrapper[4869]: I0130 22:03:24.039639 4869 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:24 crc kubenswrapper[4869]: I0130 22:03:24.039648 4869 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:24 crc kubenswrapper[4869]: I0130 22:03:24.039656 4869 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-var-locks-cinder\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:24 crc kubenswrapper[4869]: I0130 22:03:24.039665 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bca9024a-e6a5-4824-8afa-6754d2151143-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:24 crc kubenswrapper[4869]: I0130 22:03:24.039673 4869 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-dev\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:24 crc kubenswrapper[4869]: I0130 22:03:24.039680 4869 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bca9024a-e6a5-4824-8afa-6754d2151143-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 
22:03:24 crc kubenswrapper[4869]: I0130 22:03:24.039688 4869 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-var-lib-cinder\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:24 crc kubenswrapper[4869]: I0130 22:03:24.039696 4869 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:24 crc kubenswrapper[4869]: I0130 22:03:24.039704 4869 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bca9024a-e6a5-4824-8afa-6754d2151143-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:24 crc kubenswrapper[4869]: I0130 22:03:24.198279 4869 generic.go:334] "Generic (PLEG): container finished" podID="bca9024a-e6a5-4824-8afa-6754d2151143" containerID="7fc018d87eacac2281244821cbed3a2d667d7be2b2db85ccb8193fc672e00b39" exitCode=0 Jan 30 22:03:24 crc kubenswrapper[4869]: I0130 22:03:24.198327 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-2" event={"ID":"bca9024a-e6a5-4824-8afa-6754d2151143","Type":"ContainerDied","Data":"7fc018d87eacac2281244821cbed3a2d667d7be2b2db85ccb8193fc672e00b39"} Jan 30 22:03:24 crc kubenswrapper[4869]: I0130 22:03:24.198357 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-2" event={"ID":"bca9024a-e6a5-4824-8afa-6754d2151143","Type":"ContainerDied","Data":"b635bcb303b8853e7219509be57af2d70803368a78f3d137edaf7599e15ab1e8"} Jan 30 22:03:24 crc kubenswrapper[4869]: I0130 22:03:24.198360 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-2" Jan 30 22:03:24 crc kubenswrapper[4869]: I0130 22:03:24.198380 4869 scope.go:117] "RemoveContainer" containerID="55d68bf5f27721dd02131489eb5129ce472870070dad49c238cfa7d5525de31c" Jan 30 22:03:24 crc kubenswrapper[4869]: I0130 22:03:24.217676 4869 scope.go:117] "RemoveContainer" containerID="7fc018d87eacac2281244821cbed3a2d667d7be2b2db85ccb8193fc672e00b39" Jan 30 22:03:24 crc kubenswrapper[4869]: I0130 22:03:24.229765 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-backup-2"] Jan 30 22:03:24 crc kubenswrapper[4869]: I0130 22:03:24.240296 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-backup-2"] Jan 30 22:03:24 crc kubenswrapper[4869]: I0130 22:03:24.248805 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-backup-1"] Jan 30 22:03:24 crc kubenswrapper[4869]: I0130 22:03:24.249123 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-backup-1" podUID="6ad6667e-bb64-479d-9ae9-17a4c167f3da" containerName="cinder-backup" containerID="cri-o://0ac13b130f05fb8d166e6d2cf29202267788a2ec4c62ce59dde8d62e7d4ae2be" gracePeriod=30 Jan 30 22:03:24 crc kubenswrapper[4869]: I0130 22:03:24.249226 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-backup-1" podUID="6ad6667e-bb64-479d-9ae9-17a4c167f3da" containerName="probe" containerID="cri-o://889a4e40ca0bf7d7fd2978369c2680e1dd8aade60101a42bc974ed58e3af9020" gracePeriod=30 Jan 30 22:03:24 crc kubenswrapper[4869]: I0130 22:03:24.251240 4869 scope.go:117] "RemoveContainer" containerID="55d68bf5f27721dd02131489eb5129ce472870070dad49c238cfa7d5525de31c" Jan 30 22:03:24 crc kubenswrapper[4869]: E0130 22:03:24.253367 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55d68bf5f27721dd02131489eb5129ce472870070dad49c238cfa7d5525de31c\": container with ID starting with 55d68bf5f27721dd02131489eb5129ce472870070dad49c238cfa7d5525de31c not found: ID does not exist" containerID="55d68bf5f27721dd02131489eb5129ce472870070dad49c238cfa7d5525de31c" Jan 30 22:03:24 crc kubenswrapper[4869]: I0130 22:03:24.253440 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55d68bf5f27721dd02131489eb5129ce472870070dad49c238cfa7d5525de31c"} err="failed to get container status \"55d68bf5f27721dd02131489eb5129ce472870070dad49c238cfa7d5525de31c\": rpc error: code = NotFound desc = could not find container \"55d68bf5f27721dd02131489eb5129ce472870070dad49c238cfa7d5525de31c\": container with ID starting with 55d68bf5f27721dd02131489eb5129ce472870070dad49c238cfa7d5525de31c not found: ID does not exist" Jan 30 22:03:24 crc kubenswrapper[4869]: I0130 22:03:24.253494 4869 scope.go:117] "RemoveContainer" containerID="7fc018d87eacac2281244821cbed3a2d667d7be2b2db85ccb8193fc672e00b39" Jan 30 22:03:24 crc kubenswrapper[4869]: E0130 22:03:24.255972 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fc018d87eacac2281244821cbed3a2d667d7be2b2db85ccb8193fc672e00b39\": container with ID starting with 7fc018d87eacac2281244821cbed3a2d667d7be2b2db85ccb8193fc672e00b39 not found: ID does not exist" containerID="7fc018d87eacac2281244821cbed3a2d667d7be2b2db85ccb8193fc672e00b39" Jan 30 22:03:24 crc kubenswrapper[4869]: I0130 22:03:24.256010 
4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fc018d87eacac2281244821cbed3a2d667d7be2b2db85ccb8193fc672e00b39"} err="failed to get container status \"7fc018d87eacac2281244821cbed3a2d667d7be2b2db85ccb8193fc672e00b39\": rpc error: code = NotFound desc = could not find container \"7fc018d87eacac2281244821cbed3a2d667d7be2b2db85ccb8193fc672e00b39\": container with ID starting with 7fc018d87eacac2281244821cbed3a2d667d7be2b2db85ccb8193fc672e00b39 not found: ID does not exist" Jan 30 22:03:24 crc kubenswrapper[4869]: I0130 22:03:24.877643 4869 scope.go:117] "RemoveContainer" containerID="bae8ee2d2ea3ae8f98b57e2c4700d1cbe1214043f47147cae81508bdb3e4db03" Jan 30 22:03:24 crc kubenswrapper[4869]: I0130 22:03:24.877684 4869 scope.go:117] "RemoveContainer" containerID="f77f890fcdaf60c1c7106e84a9a3bffb31f68618c74dd384835bde05492c7422" Jan 30 22:03:24 crc kubenswrapper[4869]: E0130 22:03:24.878062 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(a748ea99-7369-48cb-8983-9f41ff077f82)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 40s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(a748ea99-7369-48cb-8983-9f41ff077f82)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="a748ea99-7369-48cb-8983-9f41ff077f82" Jan 30 22:03:25 crc kubenswrapper[4869]: I0130 22:03:25.885571 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bca9024a-e6a5-4824-8afa-6754d2151143" path="/var/lib/kubelet/pods/bca9024a-e6a5-4824-8afa-6754d2151143/volumes" Jan 30 22:03:26 crc kubenswrapper[4869]: I0130 22:03:26.214003 4869 generic.go:334] "Generic (PLEG): container finished" podID="6ad6667e-bb64-479d-9ae9-17a4c167f3da" containerID="889a4e40ca0bf7d7fd2978369c2680e1dd8aade60101a42bc974ed58e3af9020" exitCode=0 Jan 30 22:03:26 crc kubenswrapper[4869]: I0130 22:03:26.214073 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-1" event={"ID":"6ad6667e-bb64-479d-9ae9-17a4c167f3da","Type":"ContainerDied","Data":"889a4e40ca0bf7d7fd2978369c2680e1dd8aade60101a42bc974ed58e3af9020"} Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.005414 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-1" Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.042015 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-etc-machine-id\") pod \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.042076 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-etc-iscsi\") pod \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.042099 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-var-lib-cinder\") pod \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.042128 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5x4x\" (UniqueName: \"kubernetes.io/projected/6ad6667e-bb64-479d-9ae9-17a4c167f3da-kube-api-access-h5x4x\") pod \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.042154 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-var-locks-brick\") pod \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.042149 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6ad6667e-bb64-479d-9ae9-17a4c167f3da" (UID: "6ad6667e-bb64-479d-9ae9-17a4c167f3da"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.042174 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-etc-nvme\") pod \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.042237 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "6ad6667e-bb64-479d-9ae9-17a4c167f3da" (UID: "6ad6667e-bb64-479d-9ae9-17a4c167f3da"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.042278 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "6ad6667e-bb64-479d-9ae9-17a4c167f3da" (UID: "6ad6667e-bb64-479d-9ae9-17a4c167f3da"). InnerVolumeSpecName "etc-iscsi". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.042279 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6ad6667e-bb64-479d-9ae9-17a4c167f3da-config-data-custom\") pod \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.042323 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-sys\") pod \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.042368 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-dev\") pod \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.042406 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ad6667e-bb64-479d-9ae9-17a4c167f3da-scripts\") pod \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.042429 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-run\") pod \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.042448 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-var-locks-cinder\") pod \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.042477 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-lib-modules\") pod \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.042502 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ad6667e-bb64-479d-9ae9-17a4c167f3da-config-data\") pod \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\" (UID: \"6ad6667e-bb64-479d-9ae9-17a4c167f3da\") " Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.042808 4869 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.042826 4869 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.042837 4869 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-etc-nvme\") on 
node \"crc\" DevicePath \"\"" Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.043193 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "6ad6667e-bb64-479d-9ae9-17a4c167f3da" (UID: "6ad6667e-bb64-479d-9ae9-17a4c167f3da"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.043567 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-run" (OuterVolumeSpecName: "run") pod "6ad6667e-bb64-479d-9ae9-17a4c167f3da" (UID: "6ad6667e-bb64-479d-9ae9-17a4c167f3da"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.043618 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "6ad6667e-bb64-479d-9ae9-17a4c167f3da" (UID: "6ad6667e-bb64-479d-9ae9-17a4c167f3da"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.043647 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-sys" (OuterVolumeSpecName: "sys") pod "6ad6667e-bb64-479d-9ae9-17a4c167f3da" (UID: "6ad6667e-bb64-479d-9ae9-17a4c167f3da"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.043675 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-dev" (OuterVolumeSpecName: "dev") pod "6ad6667e-bb64-479d-9ae9-17a4c167f3da" (UID: "6ad6667e-bb64-479d-9ae9-17a4c167f3da"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.044015 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6ad6667e-bb64-479d-9ae9-17a4c167f3da" (UID: "6ad6667e-bb64-479d-9ae9-17a4c167f3da"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.044085 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "6ad6667e-bb64-479d-9ae9-17a4c167f3da" (UID: "6ad6667e-bb64-479d-9ae9-17a4c167f3da"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.055446 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ad6667e-bb64-479d-9ae9-17a4c167f3da-scripts" (OuterVolumeSpecName: "scripts") pod "6ad6667e-bb64-479d-9ae9-17a4c167f3da" (UID: "6ad6667e-bb64-479d-9ae9-17a4c167f3da"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.057182 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ad6667e-bb64-479d-9ae9-17a4c167f3da-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6ad6667e-bb64-479d-9ae9-17a4c167f3da" (UID: "6ad6667e-bb64-479d-9ae9-17a4c167f3da"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.057782 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ad6667e-bb64-479d-9ae9-17a4c167f3da-kube-api-access-h5x4x" (OuterVolumeSpecName: "kube-api-access-h5x4x") pod "6ad6667e-bb64-479d-9ae9-17a4c167f3da" (UID: "6ad6667e-bb64-479d-9ae9-17a4c167f3da"). InnerVolumeSpecName "kube-api-access-h5x4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.118096 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ad6667e-bb64-479d-9ae9-17a4c167f3da-config-data" (OuterVolumeSpecName: "config-data") pod "6ad6667e-bb64-479d-9ae9-17a4c167f3da" (UID: "6ad6667e-bb64-479d-9ae9-17a4c167f3da"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.143683 4869 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-var-lib-cinder\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.143720 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5x4x\" (UniqueName: \"kubernetes.io/projected/6ad6667e-bb64-479d-9ae9-17a4c167f3da-kube-api-access-h5x4x\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.143730 4869 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.143740 4869 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6ad6667e-bb64-479d-9ae9-17a4c167f3da-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.143749 4869 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-sys\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.143757 4869 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-dev\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.143766 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ad6667e-bb64-479d-9ae9-17a4c167f3da-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.143774 4869 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-run\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.143781 4869 
reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-var-locks-cinder\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.143789 4869 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ad6667e-bb64-479d-9ae9-17a4c167f3da-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.143798 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ad6667e-bb64-479d-9ae9-17a4c167f3da-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.289928 4869 generic.go:334] "Generic (PLEG): container finished" podID="6ad6667e-bb64-479d-9ae9-17a4c167f3da" containerID="0ac13b130f05fb8d166e6d2cf29202267788a2ec4c62ce59dde8d62e7d4ae2be" exitCode=0 Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.289975 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-1" event={"ID":"6ad6667e-bb64-479d-9ae9-17a4c167f3da","Type":"ContainerDied","Data":"0ac13b130f05fb8d166e6d2cf29202267788a2ec4c62ce59dde8d62e7d4ae2be"} Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.289996 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-1" Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.290016 4869 scope.go:117] "RemoveContainer" containerID="889a4e40ca0bf7d7fd2978369c2680e1dd8aade60101a42bc974ed58e3af9020" Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.290004 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-1" event={"ID":"6ad6667e-bb64-479d-9ae9-17a4c167f3da","Type":"ContainerDied","Data":"5d62dfd444e65ae8a3dfc91c05880a6ee16e386fdf43bbcb24dd7a160ed99ada"} Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.306007 4869 scope.go:117] "RemoveContainer" containerID="0ac13b130f05fb8d166e6d2cf29202267788a2ec4c62ce59dde8d62e7d4ae2be" Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.321035 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-backup-1"] Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.327435 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-backup-1"] Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.330380 4869 scope.go:117] "RemoveContainer" containerID="889a4e40ca0bf7d7fd2978369c2680e1dd8aade60101a42bc974ed58e3af9020" Jan 30 22:03:30 crc kubenswrapper[4869]: E0130 22:03:30.330791 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"889a4e40ca0bf7d7fd2978369c2680e1dd8aade60101a42bc974ed58e3af9020\": container with ID starting with 889a4e40ca0bf7d7fd2978369c2680e1dd8aade60101a42bc974ed58e3af9020 not found: ID does not exist" containerID="889a4e40ca0bf7d7fd2978369c2680e1dd8aade60101a42bc974ed58e3af9020" Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.330938 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"889a4e40ca0bf7d7fd2978369c2680e1dd8aade60101a42bc974ed58e3af9020"} err="failed to get container status \"889a4e40ca0bf7d7fd2978369c2680e1dd8aade60101a42bc974ed58e3af9020\": rpc error: code = NotFound desc = could not find container 
\"889a4e40ca0bf7d7fd2978369c2680e1dd8aade60101a42bc974ed58e3af9020\": container with ID starting with 889a4e40ca0bf7d7fd2978369c2680e1dd8aade60101a42bc974ed58e3af9020 not found: ID does not exist" Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.331023 4869 scope.go:117] "RemoveContainer" containerID="0ac13b130f05fb8d166e6d2cf29202267788a2ec4c62ce59dde8d62e7d4ae2be" Jan 30 22:03:30 crc kubenswrapper[4869]: E0130 22:03:30.331646 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ac13b130f05fb8d166e6d2cf29202267788a2ec4c62ce59dde8d62e7d4ae2be\": container with ID starting with 0ac13b130f05fb8d166e6d2cf29202267788a2ec4c62ce59dde8d62e7d4ae2be not found: ID does not exist" containerID="0ac13b130f05fb8d166e6d2cf29202267788a2ec4c62ce59dde8d62e7d4ae2be" Jan 30 22:03:30 crc kubenswrapper[4869]: I0130 22:03:30.331745 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ac13b130f05fb8d166e6d2cf29202267788a2ec4c62ce59dde8d62e7d4ae2be"} err="failed to get container status \"0ac13b130f05fb8d166e6d2cf29202267788a2ec4c62ce59dde8d62e7d4ae2be\": rpc error: code = NotFound desc = could not find container \"0ac13b130f05fb8d166e6d2cf29202267788a2ec4c62ce59dde8d62e7d4ae2be\": container with ID starting with 0ac13b130f05fb8d166e6d2cf29202267788a2ec4c62ce59dde8d62e7d4ae2be not found: ID does not exist" Jan 30 22:03:31 crc kubenswrapper[4869]: I0130 22:03:31.040999 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 30 22:03:31 crc kubenswrapper[4869]: I0130 22:03:31.041512 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-api-0" podUID="104613e6-ac18-41c4-8f61-abcdf0399885" containerName="cinder-api-log" containerID="cri-o://be7c1e2143e6211b03b0d771a3fe281a885c9d88c06c4df341c9a69e5807c672" gracePeriod=30 Jan 30 22:03:31 crc kubenswrapper[4869]: I0130 22:03:31.041618 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-api-0" podUID="104613e6-ac18-41c4-8f61-abcdf0399885" containerName="cinder-api" containerID="cri-o://63f5176c6505ed7c59249b1beaa617a3113094b20a46d78878081a42555f3fc2" gracePeriod=30 Jan 30 22:03:31 crc kubenswrapper[4869]: I0130 22:03:31.308951 4869 generic.go:334] "Generic (PLEG): container finished" podID="104613e6-ac18-41c4-8f61-abcdf0399885" containerID="be7c1e2143e6211b03b0d771a3fe281a885c9d88c06c4df341c9a69e5807c672" exitCode=143 Jan 30 22:03:31 crc kubenswrapper[4869]: I0130 22:03:31.309001 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"104613e6-ac18-41c4-8f61-abcdf0399885","Type":"ContainerDied","Data":"be7c1e2143e6211b03b0d771a3fe281a885c9d88c06c4df341c9a69e5807c672"} Jan 30 22:03:31 crc kubenswrapper[4869]: I0130 22:03:31.886405 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ad6667e-bb64-479d-9ae9-17a4c167f3da" path="/var/lib/kubelet/pods/6ad6667e-bb64-479d-9ae9-17a4c167f3da/volumes" Jan 30 22:03:31 crc kubenswrapper[4869]: I0130 22:03:31.990748 4869 patch_prober.go:28] interesting pod/machine-config-daemon-vzgdv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:03:31 crc kubenswrapper[4869]: I0130 22:03:31.990818 4869 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:03:34 crc kubenswrapper[4869]: I0130 22:03:34.175904 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="cinder-kuttl-tests/cinder-api-0" podUID="104613e6-ac18-41c4-8f61-abcdf0399885" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.94:8776/healthcheck\": read tcp 10.217.0.2:47090->10.217.0.94:8776: read: connection reset by peer" Jan 30 22:03:34 crc kubenswrapper[4869]: I0130 22:03:34.329603 4869 generic.go:334] "Generic (PLEG): container finished" podID="104613e6-ac18-41c4-8f61-abcdf0399885" containerID="63f5176c6505ed7c59249b1beaa617a3113094b20a46d78878081a42555f3fc2" exitCode=0 Jan 30 22:03:34 crc kubenswrapper[4869]: I0130 22:03:34.329657 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"104613e6-ac18-41c4-8f61-abcdf0399885","Type":"ContainerDied","Data":"63f5176c6505ed7c59249b1beaa617a3113094b20a46d78878081a42555f3fc2"} Jan 30 22:03:34 crc kubenswrapper[4869]: I0130 22:03:34.556968 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:03:34 crc kubenswrapper[4869]: I0130 22:03:34.716318 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xm5fd\" (UniqueName: \"kubernetes.io/projected/104613e6-ac18-41c4-8f61-abcdf0399885-kube-api-access-xm5fd\") pod \"104613e6-ac18-41c4-8f61-abcdf0399885\" (UID: \"104613e6-ac18-41c4-8f61-abcdf0399885\") " Jan 30 22:03:34 crc kubenswrapper[4869]: I0130 22:03:34.716458 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/104613e6-ac18-41c4-8f61-abcdf0399885-config-data\") pod \"104613e6-ac18-41c4-8f61-abcdf0399885\" (UID: \"104613e6-ac18-41c4-8f61-abcdf0399885\") " Jan 30 22:03:34 crc kubenswrapper[4869]: I0130 22:03:34.716506 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/104613e6-ac18-41c4-8f61-abcdf0399885-logs\") pod \"104613e6-ac18-41c4-8f61-abcdf0399885\" (UID: \"104613e6-ac18-41c4-8f61-abcdf0399885\") " Jan 30 22:03:34 crc kubenswrapper[4869]: I0130 22:03:34.716554 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/104613e6-ac18-41c4-8f61-abcdf0399885-etc-machine-id\") pod \"104613e6-ac18-41c4-8f61-abcdf0399885\" (UID: \"104613e6-ac18-41c4-8f61-abcdf0399885\") " Jan 30 22:03:34 crc kubenswrapper[4869]: I0130 22:03:34.716585 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/104613e6-ac18-41c4-8f61-abcdf0399885-config-data-custom\") pod \"104613e6-ac18-41c4-8f61-abcdf0399885\" (UID: \"104613e6-ac18-41c4-8f61-abcdf0399885\") " Jan 30 22:03:34 crc kubenswrapper[4869]: I0130 22:03:34.716626 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/104613e6-ac18-41c4-8f61-abcdf0399885-scripts\") pod \"104613e6-ac18-41c4-8f61-abcdf0399885\" (UID: \"104613e6-ac18-41c4-8f61-abcdf0399885\") " Jan 30 22:03:34 crc 
kubenswrapper[4869]: I0130 22:03:34.717826 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/104613e6-ac18-41c4-8f61-abcdf0399885-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "104613e6-ac18-41c4-8f61-abcdf0399885" (UID: "104613e6-ac18-41c4-8f61-abcdf0399885"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:03:34 crc kubenswrapper[4869]: I0130 22:03:34.717963 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/104613e6-ac18-41c4-8f61-abcdf0399885-logs" (OuterVolumeSpecName: "logs") pod "104613e6-ac18-41c4-8f61-abcdf0399885" (UID: "104613e6-ac18-41c4-8f61-abcdf0399885"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:03:34 crc kubenswrapper[4869]: I0130 22:03:34.722261 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/104613e6-ac18-41c4-8f61-abcdf0399885-kube-api-access-xm5fd" (OuterVolumeSpecName: "kube-api-access-xm5fd") pod "104613e6-ac18-41c4-8f61-abcdf0399885" (UID: "104613e6-ac18-41c4-8f61-abcdf0399885"). InnerVolumeSpecName "kube-api-access-xm5fd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:03:34 crc kubenswrapper[4869]: I0130 22:03:34.723651 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/104613e6-ac18-41c4-8f61-abcdf0399885-scripts" (OuterVolumeSpecName: "scripts") pod "104613e6-ac18-41c4-8f61-abcdf0399885" (UID: "104613e6-ac18-41c4-8f61-abcdf0399885"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:34 crc kubenswrapper[4869]: I0130 22:03:34.725431 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/104613e6-ac18-41c4-8f61-abcdf0399885-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "104613e6-ac18-41c4-8f61-abcdf0399885" (UID: "104613e6-ac18-41c4-8f61-abcdf0399885"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:34 crc kubenswrapper[4869]: I0130 22:03:34.753475 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/104613e6-ac18-41c4-8f61-abcdf0399885-config-data" (OuterVolumeSpecName: "config-data") pod "104613e6-ac18-41c4-8f61-abcdf0399885" (UID: "104613e6-ac18-41c4-8f61-abcdf0399885"). InnerVolumeSpecName "config-data". 
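
The unmount sequence above is the kubelet volume manager's reconciler at work: the pod was removed from the desired state when its API object was deleted, so every volume still present in the actual state (the projected service-account token, the config-data/scripts/config-data-custom secrets, the logs empty-dir, and the etc-machine-id host path) is torn down by its plugin and then reported detached. A minimal sketch of that desired-vs-actual diff, with invented types and names rather than kubelet source:

    package main

    import "fmt"

    // volume is an illustrative stand-in for the reconciler's per-volume record.
    type volume struct {
    	name   string
    	plugin string
    }

    func main() {
    	// Actual state: volumes still mounted for the deleted pod.
    	actual := []volume{
    		{"kube-api-access-xm5fd", "kubernetes.io/projected"},
    		{"config-data", "kubernetes.io/secret"},
    		{"scripts", "kubernetes.io/secret"},
    		{"config-data-custom", "kubernetes.io/secret"},
    		{"logs", "kubernetes.io/empty-dir"},
    		{"etc-machine-id", "kubernetes.io/host-path"},
    	}
    	// Desired state: empty, because the pod no longer exists in the API.
    	desired := map[string]bool{}

    	// Anything mounted but no longer desired is unmounted, then reported
    	// detached -- the same started/succeeded/detached progression logged above.
    	for _, v := range actual {
    		if !desired[v.name] {
    			fmt.Printf("UnmountVolume started for volume %q (%s)\n", v.name, v.plugin)
    			fmt.Printf("Volume detached for volume %q\n", v.name)
    		}
    	}
    }
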
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:34 crc kubenswrapper[4869]: I0130 22:03:34.818548 4869 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/104613e6-ac18-41c4-8f61-abcdf0399885-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:34 crc kubenswrapper[4869]: I0130 22:03:34.818586 4869 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/104613e6-ac18-41c4-8f61-abcdf0399885-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:34 crc kubenswrapper[4869]: I0130 22:03:34.818599 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/104613e6-ac18-41c4-8f61-abcdf0399885-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:34 crc kubenswrapper[4869]: I0130 22:03:34.818613 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xm5fd\" (UniqueName: \"kubernetes.io/projected/104613e6-ac18-41c4-8f61-abcdf0399885-kube-api-access-xm5fd\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:34 crc kubenswrapper[4869]: I0130 22:03:34.818628 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/104613e6-ac18-41c4-8f61-abcdf0399885-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:34 crc kubenswrapper[4869]: I0130 22:03:34.818640 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/104613e6-ac18-41c4-8f61-abcdf0399885-logs\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:35 crc kubenswrapper[4869]: I0130 22:03:35.339194 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"104613e6-ac18-41c4-8f61-abcdf0399885","Type":"ContainerDied","Data":"b7539ad9d7f96fdc95f8dd714ae1d4fef6a10945ec1714dda7c4fe2c8cbaa1c3"} Jan 30 22:03:35 crc kubenswrapper[4869]: I0130 22:03:35.340958 4869 scope.go:117] "RemoveContainer" containerID="63f5176c6505ed7c59249b1beaa617a3113094b20a46d78878081a42555f3fc2" Jan 30 22:03:35 crc kubenswrapper[4869]: I0130 22:03:35.339261 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:03:35 crc kubenswrapper[4869]: I0130 22:03:35.370171 4869 scope.go:117] "RemoveContainer" containerID="be7c1e2143e6211b03b0d771a3fe281a885c9d88c06c4df341c9a69e5807c672" Jan 30 22:03:35 crc kubenswrapper[4869]: I0130 22:03:35.371101 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 30 22:03:35 crc kubenswrapper[4869]: I0130 22:03:35.386130 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 30 22:03:35 crc kubenswrapper[4869]: I0130 22:03:35.887333 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="104613e6-ac18-41c4-8f61-abcdf0399885" path="/var/lib/kubelet/pods/104613e6-ac18-41c4-8f61-abcdf0399885/volumes" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.288752 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 30 22:03:36 crc kubenswrapper[4869]: E0130 22:03:36.289039 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="104613e6-ac18-41c4-8f61-abcdf0399885" containerName="cinder-api" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.289059 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="104613e6-ac18-41c4-8f61-abcdf0399885" containerName="cinder-api" Jan 30 22:03:36 crc kubenswrapper[4869]: E0130 22:03:36.289085 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bca9024a-e6a5-4824-8afa-6754d2151143" containerName="cinder-backup" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.289094 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="bca9024a-e6a5-4824-8afa-6754d2151143" containerName="cinder-backup" Jan 30 22:03:36 crc kubenswrapper[4869]: E0130 22:03:36.289106 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ad6667e-bb64-479d-9ae9-17a4c167f3da" containerName="cinder-backup" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.289113 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ad6667e-bb64-479d-9ae9-17a4c167f3da" containerName="cinder-backup" Jan 30 22:03:36 crc kubenswrapper[4869]: E0130 22:03:36.289125 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="104613e6-ac18-41c4-8f61-abcdf0399885" containerName="cinder-api-log" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.289131 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="104613e6-ac18-41c4-8f61-abcdf0399885" containerName="cinder-api-log" Jan 30 22:03:36 crc kubenswrapper[4869]: E0130 22:03:36.289142 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ad6667e-bb64-479d-9ae9-17a4c167f3da" containerName="probe" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.289148 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ad6667e-bb64-479d-9ae9-17a4c167f3da" containerName="probe" Jan 30 22:03:36 crc kubenswrapper[4869]: E0130 22:03:36.289163 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bca9024a-e6a5-4824-8afa-6754d2151143" containerName="probe" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.289169 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="bca9024a-e6a5-4824-8afa-6754d2151143" containerName="probe" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.289282 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="104613e6-ac18-41c4-8f61-abcdf0399885" containerName="cinder-api-log" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.289301 4869 
memory_manager.go:354] "RemoveStaleState removing state" podUID="6ad6667e-bb64-479d-9ae9-17a4c167f3da" containerName="cinder-backup" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.289312 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="104613e6-ac18-41c4-8f61-abcdf0399885" containerName="cinder-api" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.289321 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ad6667e-bb64-479d-9ae9-17a4c167f3da" containerName="probe" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.289333 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="bca9024a-e6a5-4824-8afa-6754d2151143" containerName="probe" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.289344 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="bca9024a-e6a5-4824-8afa-6754d2151143" containerName="cinder-backup" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.290003 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:03:36 crc kubenswrapper[4869]: W0130 22:03:36.291730 4869 reflector.go:561] object-"cinder-kuttl-tests"/"cinder-api-config-data": failed to list *v1.Secret: secrets "cinder-api-config-data" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "cinder-kuttl-tests": no relationship found between node 'crc' and this object Jan 30 22:03:36 crc kubenswrapper[4869]: E0130 22:03:36.291785 4869 reflector.go:158] "Unhandled Error" err="object-\"cinder-kuttl-tests\"/\"cinder-api-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cinder-api-config-data\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"cinder-kuttl-tests\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.298112 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-api-1"] Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.299477 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-1" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.303587 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-api-2"] Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.304879 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-api-2" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.308769 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.319760 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-api-2"] Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.325002 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-api-1"] Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.340617 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4bb8bcda-df1e-447e-a871-f993d3f8f0fe-config-data-custom\") pod \"cinder-api-2\" (UID: \"4bb8bcda-df1e-447e-a871-f993d3f8f0fe\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.340661 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h49k\" (UniqueName: \"kubernetes.io/projected/4bb8bcda-df1e-447e-a871-f993d3f8f0fe-kube-api-access-8h49k\") pod \"cinder-api-2\" (UID: \"4bb8bcda-df1e-447e-a871-f993d3f8f0fe\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.340691 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4588829-9e62-46b7-8de8-949d869d21b5-config-data\") pod \"cinder-api-1\" (UID: \"d4588829-9e62-46b7-8de8-949d869d21b5\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.340712 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg92z\" (UniqueName: \"kubernetes.io/projected/a05330df-ce32-4e24-9c83-80d3d6851fe2-kube-api-access-gg92z\") pod \"cinder-api-0\" (UID: \"a05330df-ce32-4e24-9c83-80d3d6851fe2\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.340757 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a05330df-ce32-4e24-9c83-80d3d6851fe2-config-data-custom\") pod \"cinder-api-0\" (UID: \"a05330df-ce32-4e24-9c83-80d3d6851fe2\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.340789 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4588829-9e62-46b7-8de8-949d869d21b5-logs\") pod \"cinder-api-1\" (UID: \"d4588829-9e62-46b7-8de8-949d869d21b5\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.340809 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a05330df-ce32-4e24-9c83-80d3d6851fe2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a05330df-ce32-4e24-9c83-80d3d6851fe2\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.340827 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4588829-9e62-46b7-8de8-949d869d21b5-scripts\") pod \"cinder-api-1\" (UID: 
\"d4588829-9e62-46b7-8de8-949d869d21b5\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.340853 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4588829-9e62-46b7-8de8-949d869d21b5-config-data-custom\") pod \"cinder-api-1\" (UID: \"d4588829-9e62-46b7-8de8-949d869d21b5\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.340878 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4bb8bcda-df1e-447e-a871-f993d3f8f0fe-etc-machine-id\") pod \"cinder-api-2\" (UID: \"4bb8bcda-df1e-447e-a871-f993d3f8f0fe\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.340919 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bb8bcda-df1e-447e-a871-f993d3f8f0fe-scripts\") pod \"cinder-api-2\" (UID: \"4bb8bcda-df1e-447e-a871-f993d3f8f0fe\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.341061 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bb8bcda-df1e-447e-a871-f993d3f8f0fe-config-data\") pod \"cinder-api-2\" (UID: \"4bb8bcda-df1e-447e-a871-f993d3f8f0fe\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.341098 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a05330df-ce32-4e24-9c83-80d3d6851fe2-config-data\") pod \"cinder-api-0\" (UID: \"a05330df-ce32-4e24-9c83-80d3d6851fe2\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.341119 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a05330df-ce32-4e24-9c83-80d3d6851fe2-scripts\") pod \"cinder-api-0\" (UID: \"a05330df-ce32-4e24-9c83-80d3d6851fe2\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.343140 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a05330df-ce32-4e24-9c83-80d3d6851fe2-logs\") pod \"cinder-api-0\" (UID: \"a05330df-ce32-4e24-9c83-80d3d6851fe2\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.343205 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d4588829-9e62-46b7-8de8-949d869d21b5-etc-machine-id\") pod \"cinder-api-1\" (UID: \"d4588829-9e62-46b7-8de8-949d869d21b5\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.343231 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bb8bcda-df1e-447e-a871-f993d3f8f0fe-logs\") pod \"cinder-api-2\" (UID: \"4bb8bcda-df1e-447e-a871-f993d3f8f0fe\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.343265 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbqcj\" (UniqueName: \"kubernetes.io/projected/d4588829-9e62-46b7-8de8-949d869d21b5-kube-api-access-fbqcj\") pod \"cinder-api-1\" (UID: \"d4588829-9e62-46b7-8de8-949d869d21b5\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.444463 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4588829-9e62-46b7-8de8-949d869d21b5-logs\") pod \"cinder-api-1\" (UID: \"d4588829-9e62-46b7-8de8-949d869d21b5\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.444509 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a05330df-ce32-4e24-9c83-80d3d6851fe2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a05330df-ce32-4e24-9c83-80d3d6851fe2\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.444539 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4588829-9e62-46b7-8de8-949d869d21b5-scripts\") pod \"cinder-api-1\" (UID: \"d4588829-9e62-46b7-8de8-949d869d21b5\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.444575 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4588829-9e62-46b7-8de8-949d869d21b5-config-data-custom\") pod \"cinder-api-1\" (UID: \"d4588829-9e62-46b7-8de8-949d869d21b5\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.444598 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4bb8bcda-df1e-447e-a871-f993d3f8f0fe-etc-machine-id\") pod \"cinder-api-2\" (UID: \"4bb8bcda-df1e-447e-a871-f993d3f8f0fe\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.444621 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bb8bcda-df1e-447e-a871-f993d3f8f0fe-scripts\") pod \"cinder-api-2\" (UID: \"4bb8bcda-df1e-447e-a871-f993d3f8f0fe\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.444624 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a05330df-ce32-4e24-9c83-80d3d6851fe2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a05330df-ce32-4e24-9c83-80d3d6851fe2\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.444651 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bb8bcda-df1e-447e-a871-f993d3f8f0fe-config-data\") pod \"cinder-api-2\" (UID: \"4bb8bcda-df1e-447e-a871-f993d3f8f0fe\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.444708 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a05330df-ce32-4e24-9c83-80d3d6851fe2-config-data\") pod \"cinder-api-0\" (UID: \"a05330df-ce32-4e24-9c83-80d3d6851fe2\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:03:36 crc 
kubenswrapper[4869]: I0130 22:03:36.444738 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a05330df-ce32-4e24-9c83-80d3d6851fe2-scripts\") pod \"cinder-api-0\" (UID: \"a05330df-ce32-4e24-9c83-80d3d6851fe2\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.444758 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a05330df-ce32-4e24-9c83-80d3d6851fe2-logs\") pod \"cinder-api-0\" (UID: \"a05330df-ce32-4e24-9c83-80d3d6851fe2\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.444789 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d4588829-9e62-46b7-8de8-949d869d21b5-etc-machine-id\") pod \"cinder-api-1\" (UID: \"d4588829-9e62-46b7-8de8-949d869d21b5\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.444816 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bb8bcda-df1e-447e-a871-f993d3f8f0fe-logs\") pod \"cinder-api-2\" (UID: \"4bb8bcda-df1e-447e-a871-f993d3f8f0fe\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.444845 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbqcj\" (UniqueName: \"kubernetes.io/projected/d4588829-9e62-46b7-8de8-949d869d21b5-kube-api-access-fbqcj\") pod \"cinder-api-1\" (UID: \"d4588829-9e62-46b7-8de8-949d869d21b5\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.444882 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4bb8bcda-df1e-447e-a871-f993d3f8f0fe-config-data-custom\") pod \"cinder-api-2\" (UID: \"4bb8bcda-df1e-447e-a871-f993d3f8f0fe\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.444922 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h49k\" (UniqueName: \"kubernetes.io/projected/4bb8bcda-df1e-447e-a871-f993d3f8f0fe-kube-api-access-8h49k\") pod \"cinder-api-2\" (UID: \"4bb8bcda-df1e-447e-a871-f993d3f8f0fe\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.444949 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4588829-9e62-46b7-8de8-949d869d21b5-config-data\") pod \"cinder-api-1\" (UID: \"d4588829-9e62-46b7-8de8-949d869d21b5\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.444976 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg92z\" (UniqueName: \"kubernetes.io/projected/a05330df-ce32-4e24-9c83-80d3d6851fe2-kube-api-access-gg92z\") pod \"cinder-api-0\" (UID: \"a05330df-ce32-4e24-9c83-80d3d6851fe2\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.445034 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a05330df-ce32-4e24-9c83-80d3d6851fe2-config-data-custom\") pod \"cinder-api-0\" (UID: 
\"a05330df-ce32-4e24-9c83-80d3d6851fe2\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.445119 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4588829-9e62-46b7-8de8-949d869d21b5-logs\") pod \"cinder-api-1\" (UID: \"d4588829-9e62-46b7-8de8-949d869d21b5\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.445166 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d4588829-9e62-46b7-8de8-949d869d21b5-etc-machine-id\") pod \"cinder-api-1\" (UID: \"d4588829-9e62-46b7-8de8-949d869d21b5\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.445244 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a05330df-ce32-4e24-9c83-80d3d6851fe2-logs\") pod \"cinder-api-0\" (UID: \"a05330df-ce32-4e24-9c83-80d3d6851fe2\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.445488 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bb8bcda-df1e-447e-a871-f993d3f8f0fe-logs\") pod \"cinder-api-2\" (UID: \"4bb8bcda-df1e-447e-a871-f993d3f8f0fe\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.446067 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4bb8bcda-df1e-447e-a871-f993d3f8f0fe-etc-machine-id\") pod \"cinder-api-2\" (UID: \"4bb8bcda-df1e-447e-a871-f993d3f8f0fe\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.450155 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bb8bcda-df1e-447e-a871-f993d3f8f0fe-scripts\") pod \"cinder-api-2\" (UID: \"4bb8bcda-df1e-447e-a871-f993d3f8f0fe\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.450345 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a05330df-ce32-4e24-9c83-80d3d6851fe2-config-data\") pod \"cinder-api-0\" (UID: \"a05330df-ce32-4e24-9c83-80d3d6851fe2\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.450589 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4588829-9e62-46b7-8de8-949d869d21b5-scripts\") pod \"cinder-api-1\" (UID: \"d4588829-9e62-46b7-8de8-949d869d21b5\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.452989 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4588829-9e62-46b7-8de8-949d869d21b5-config-data\") pod \"cinder-api-1\" (UID: \"d4588829-9e62-46b7-8de8-949d869d21b5\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.459524 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a05330df-ce32-4e24-9c83-80d3d6851fe2-scripts\") pod \"cinder-api-0\" (UID: \"a05330df-ce32-4e24-9c83-80d3d6851fe2\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:03:36 crc 
kubenswrapper[4869]: I0130 22:03:36.462372 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h49k\" (UniqueName: \"kubernetes.io/projected/4bb8bcda-df1e-447e-a871-f993d3f8f0fe-kube-api-access-8h49k\") pod \"cinder-api-2\" (UID: \"4bb8bcda-df1e-447e-a871-f993d3f8f0fe\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.463040 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg92z\" (UniqueName: \"kubernetes.io/projected/a05330df-ce32-4e24-9c83-80d3d6851fe2-kube-api-access-gg92z\") pod \"cinder-api-0\" (UID: \"a05330df-ce32-4e24-9c83-80d3d6851fe2\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.464448 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bb8bcda-df1e-447e-a871-f993d3f8f0fe-config-data\") pod \"cinder-api-2\" (UID: \"4bb8bcda-df1e-447e-a871-f993d3f8f0fe\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 30 22:03:36 crc kubenswrapper[4869]: I0130 22:03:36.467458 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbqcj\" (UniqueName: \"kubernetes.io/projected/d4588829-9e62-46b7-8de8-949d869d21b5-kube-api-access-fbqcj\") pod \"cinder-api-1\" (UID: \"d4588829-9e62-46b7-8de8-949d869d21b5\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 30 22:03:37 crc kubenswrapper[4869]: E0130 22:03:37.446048 4869 secret.go:188] Couldn't get secret cinder-kuttl-tests/cinder-api-config-data: failed to sync secret cache: timed out waiting for the condition Jan 30 22:03:37 crc kubenswrapper[4869]: E0130 22:03:37.446102 4869 secret.go:188] Couldn't get secret cinder-kuttl-tests/cinder-api-config-data: failed to sync secret cache: timed out waiting for the condition Jan 30 22:03:37 crc kubenswrapper[4869]: E0130 22:03:37.446131 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a05330df-ce32-4e24-9c83-80d3d6851fe2-config-data-custom podName:a05330df-ce32-4e24-9c83-80d3d6851fe2 nodeName:}" failed. No retries permitted until 2026-01-30 22:03:37.946110244 +0000 UTC m=+1218.831868269 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data-custom" (UniqueName: "kubernetes.io/secret/a05330df-ce32-4e24-9c83-80d3d6851fe2-config-data-custom") pod "cinder-api-0" (UID: "a05330df-ce32-4e24-9c83-80d3d6851fe2") : failed to sync secret cache: timed out waiting for the condition Jan 30 22:03:37 crc kubenswrapper[4869]: E0130 22:03:37.446186 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4bb8bcda-df1e-447e-a871-f993d3f8f0fe-config-data-custom podName:4bb8bcda-df1e-447e-a871-f993d3f8f0fe nodeName:}" failed. No retries permitted until 2026-01-30 22:03:37.946163286 +0000 UTC m=+1218.831921311 (durationBeforeRetry 500ms). 
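
The three MountVolume.SetUp failures above are a startup race, not a persistent error: the node is only authorized to read cinder-api-config-data once a pod referencing it is bound here (hence the earlier "no relationship found between node 'crc' and this object" warning), so the first mount attempts time out waiting on the secret cache and are requeued with durationBeforeRetry 500ms. The requeue delay grows roughly exponentially per consecutive failure; a sketch of that shape, with the base taken from the log and the cap an assumption for illustration rather than a verified kubelet constant:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	backoff := 500 * time.Millisecond  // matches "durationBeforeRetry 500ms" above
    	const maxBackoff = 2 * time.Minute // assumed cap, for illustration only
    	for attempt := 1; attempt <= 6; attempt++ {
    		fmt.Printf("attempt %d failed; no retries permitted for %s\n", attempt, backoff)
    		backoff *= 2
    		if backoff > maxBackoff {
    			backoff = maxBackoff
    		}
    	}
    }

In this log only the first attempt actually fails: "Caches populated" appears within the first 500ms window, so the retry at 22:03:37.96 mounts all three config-data-custom volumes.
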
Error: MountVolume.SetUp failed for volume "config-data-custom" (UniqueName: "kubernetes.io/secret/4bb8bcda-df1e-447e-a871-f993d3f8f0fe-config-data-custom") pod "cinder-api-2" (UID: "4bb8bcda-df1e-447e-a871-f993d3f8f0fe") : failed to sync secret cache: timed out waiting for the condition Jan 30 22:03:37 crc kubenswrapper[4869]: E0130 22:03:37.446215 4869 secret.go:188] Couldn't get secret cinder-kuttl-tests/cinder-api-config-data: failed to sync secret cache: timed out waiting for the condition Jan 30 22:03:37 crc kubenswrapper[4869]: E0130 22:03:37.446238 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4588829-9e62-46b7-8de8-949d869d21b5-config-data-custom podName:d4588829-9e62-46b7-8de8-949d869d21b5 nodeName:}" failed. No retries permitted until 2026-01-30 22:03:37.946232348 +0000 UTC m=+1218.831990373 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data-custom" (UniqueName: "kubernetes.io/secret/d4588829-9e62-46b7-8de8-949d869d21b5-config-data-custom") pod "cinder-api-1" (UID: "d4588829-9e62-46b7-8de8-949d869d21b5") : failed to sync secret cache: timed out waiting for the condition Jan 30 22:03:37 crc kubenswrapper[4869]: I0130 22:03:37.585158 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-api-config-data" Jan 30 22:03:37 crc kubenswrapper[4869]: I0130 22:03:37.963760 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4588829-9e62-46b7-8de8-949d869d21b5-config-data-custom\") pod \"cinder-api-1\" (UID: \"d4588829-9e62-46b7-8de8-949d869d21b5\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 30 22:03:37 crc kubenswrapper[4869]: I0130 22:03:37.964171 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4bb8bcda-df1e-447e-a871-f993d3f8f0fe-config-data-custom\") pod \"cinder-api-2\" (UID: \"4bb8bcda-df1e-447e-a871-f993d3f8f0fe\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 30 22:03:37 crc kubenswrapper[4869]: I0130 22:03:37.964255 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a05330df-ce32-4e24-9c83-80d3d6851fe2-config-data-custom\") pod \"cinder-api-0\" (UID: \"a05330df-ce32-4e24-9c83-80d3d6851fe2\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:03:37 crc kubenswrapper[4869]: I0130 22:03:37.968432 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a05330df-ce32-4e24-9c83-80d3d6851fe2-config-data-custom\") pod \"cinder-api-0\" (UID: \"a05330df-ce32-4e24-9c83-80d3d6851fe2\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:03:37 crc kubenswrapper[4869]: I0130 22:03:37.968455 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4bb8bcda-df1e-447e-a871-f993d3f8f0fe-config-data-custom\") pod \"cinder-api-2\" (UID: \"4bb8bcda-df1e-447e-a871-f993d3f8f0fe\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 30 22:03:37 crc kubenswrapper[4869]: I0130 22:03:37.969123 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4588829-9e62-46b7-8de8-949d869d21b5-config-data-custom\") pod \"cinder-api-1\" (UID: \"d4588829-9e62-46b7-8de8-949d869d21b5\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 
30 22:03:38 crc kubenswrapper[4869]: I0130 22:03:38.104775 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:03:38 crc kubenswrapper[4869]: I0130 22:03:38.123331 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-1" Jan 30 22:03:38 crc kubenswrapper[4869]: I0130 22:03:38.130603 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-2" Jan 30 22:03:38 crc kubenswrapper[4869]: I0130 22:03:38.331551 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 30 22:03:38 crc kubenswrapper[4869]: W0130 22:03:38.341162 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda05330df_ce32_4e24_9c83_80d3d6851fe2.slice/crio-13005a6de2acb656d412ffcc3e9e063f8a295b3305b3b75b027b3a04b7529760 WatchSource:0}: Error finding container 13005a6de2acb656d412ffcc3e9e063f8a295b3305b3b75b027b3a04b7529760: Status 404 returned error can't find the container with id 13005a6de2acb656d412ffcc3e9e063f8a295b3305b3b75b027b3a04b7529760 Jan 30 22:03:38 crc kubenswrapper[4869]: I0130 22:03:38.393063 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"a05330df-ce32-4e24-9c83-80d3d6851fe2","Type":"ContainerStarted","Data":"13005a6de2acb656d412ffcc3e9e063f8a295b3305b3b75b027b3a04b7529760"} Jan 30 22:03:38 crc kubenswrapper[4869]: I0130 22:03:38.586926 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-api-1"] Jan 30 22:03:38 crc kubenswrapper[4869]: I0130 22:03:38.594168 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-api-2"] Jan 30 22:03:38 crc kubenswrapper[4869]: W0130 22:03:38.599864 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4bb8bcda_df1e_447e_a871_f993d3f8f0fe.slice/crio-1df3586abb6c7c0c26c956fbea9ee0acd9f9687b73ff0be2b6c3591eda0dd1f7 WatchSource:0}: Error finding container 1df3586abb6c7c0c26c956fbea9ee0acd9f9687b73ff0be2b6c3591eda0dd1f7: Status 404 returned error can't find the container with id 1df3586abb6c7c0c26c956fbea9ee0acd9f9687b73ff0be2b6c3591eda0dd1f7 Jan 30 22:03:38 crc kubenswrapper[4869]: I0130 22:03:38.877713 4869 scope.go:117] "RemoveContainer" containerID="bae8ee2d2ea3ae8f98b57e2c4700d1cbe1214043f47147cae81508bdb3e4db03" Jan 30 22:03:38 crc kubenswrapper[4869]: I0130 22:03:38.877748 4869 scope.go:117] "RemoveContainer" containerID="f77f890fcdaf60c1c7106e84a9a3bffb31f68618c74dd384835bde05492c7422" Jan 30 22:03:38 crc kubenswrapper[4869]: E0130 22:03:38.877970 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(a748ea99-7369-48cb-8983-9f41ff077f82)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 40s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(a748ea99-7369-48cb-8983-9f41ff077f82)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="a748ea99-7369-48cb-8983-9f41ff077f82" Jan 30 22:03:39 crc kubenswrapper[4869]: I0130 22:03:39.405662 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" 
event={"ID":"a05330df-ce32-4e24-9c83-80d3d6851fe2","Type":"ContainerStarted","Data":"0a4660d9b7201024ad830649e6f62adaf974d3b510cee5cac6e343b871616828"} Jan 30 22:03:39 crc kubenswrapper[4869]: I0130 22:03:39.406014 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"a05330df-ce32-4e24-9c83-80d3d6851fe2","Type":"ContainerStarted","Data":"2bb15315f2b3aab17d241c4b95c77aad9853fb4456d4eb4d2bf7998cb1d88f78"} Jan 30 22:03:39 crc kubenswrapper[4869]: I0130 22:03:39.407338 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-1" event={"ID":"d4588829-9e62-46b7-8de8-949d869d21b5","Type":"ContainerStarted","Data":"b9f8c253d0bda51a604e111734b584c3e5383408d0329df2d0f6431c4965ff71"} Jan 30 22:03:39 crc kubenswrapper[4869]: I0130 22:03:39.407364 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-1" event={"ID":"d4588829-9e62-46b7-8de8-949d869d21b5","Type":"ContainerStarted","Data":"08c99aedd594ac9e38f0e73de3abe53540a4ef1a5077f18300c044bb181d5fe1"} Jan 30 22:03:39 crc kubenswrapper[4869]: I0130 22:03:39.408285 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-2" event={"ID":"4bb8bcda-df1e-447e-a871-f993d3f8f0fe","Type":"ContainerStarted","Data":"af5740313aa4ab5857e0e9e90c258158a4df2e127bcd0f83b16eb7ae9efb7cd9"} Jan 30 22:03:39 crc kubenswrapper[4869]: I0130 22:03:39.408309 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-2" event={"ID":"4bb8bcda-df1e-447e-a871-f993d3f8f0fe","Type":"ContainerStarted","Data":"1df3586abb6c7c0c26c956fbea9ee0acd9f9687b73ff0be2b6c3591eda0dd1f7"} Jan 30 22:03:40 crc kubenswrapper[4869]: I0130 22:03:40.419649 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-2" event={"ID":"4bb8bcda-df1e-447e-a871-f993d3f8f0fe","Type":"ContainerStarted","Data":"365f9174d1a4678299bc9019df0ab009188295ff37e08205b08124b4e613def1"} Jan 30 22:03:40 crc kubenswrapper[4869]: I0130 22:03:40.420476 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cinder-kuttl-tests/cinder-api-2" Jan 30 22:03:40 crc kubenswrapper[4869]: I0130 22:03:40.424170 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-1" event={"ID":"d4588829-9e62-46b7-8de8-949d869d21b5","Type":"ContainerStarted","Data":"1703043fc7ec030293f446e1c9a44298728d10149e4eca5f10abe6b6ed5dfb95"} Jan 30 22:03:40 crc kubenswrapper[4869]: I0130 22:03:40.424233 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:03:40 crc kubenswrapper[4869]: I0130 22:03:40.424245 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cinder-kuttl-tests/cinder-api-1" Jan 30 22:03:40 crc kubenswrapper[4869]: I0130 22:03:40.446519 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-api-2" podStartSLOduration=4.446498125 podStartE2EDuration="4.446498125s" podCreationTimestamp="2026-01-30 22:03:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:03:40.439765635 +0000 UTC m=+1221.325523660" watchObservedRunningTime="2026-01-30 22:03:40.446498125 +0000 UTC m=+1221.332256150" Jan 30 22:03:40 crc kubenswrapper[4869]: I0130 22:03:40.457633 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-api-0" 
podStartSLOduration=4.457589684 podStartE2EDuration="4.457589684s" podCreationTimestamp="2026-01-30 22:03:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:03:40.456674525 +0000 UTC m=+1221.342432540" watchObservedRunningTime="2026-01-30 22:03:40.457589684 +0000 UTC m=+1221.343347709" Jan 30 22:03:40 crc kubenswrapper[4869]: I0130 22:03:40.487268 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-api-1" podStartSLOduration=4.487244276 podStartE2EDuration="4.487244276s" podCreationTimestamp="2026-01-30 22:03:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:03:40.4765837 +0000 UTC m=+1221.362341735" watchObservedRunningTime="2026-01-30 22:03:40.487244276 +0000 UTC m=+1221.373002301" Jan 30 22:03:50 crc kubenswrapper[4869]: I0130 22:03:50.075212 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cinder-kuttl-tests/cinder-api-1" Jan 30 22:03:50 crc kubenswrapper[4869]: I0130 22:03:50.201928 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cinder-kuttl-tests/cinder-api-2" Jan 30 22:03:50 crc kubenswrapper[4869]: I0130 22:03:50.244303 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:03:50 crc kubenswrapper[4869]: I0130 22:03:50.674832 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-api-2"] Jan 30 22:03:50 crc kubenswrapper[4869]: I0130 22:03:50.675074 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-api-2" podUID="4bb8bcda-df1e-447e-a871-f993d3f8f0fe" containerName="cinder-api-log" containerID="cri-o://af5740313aa4ab5857e0e9e90c258158a4df2e127bcd0f83b16eb7ae9efb7cd9" gracePeriod=30 Jan 30 22:03:50 crc kubenswrapper[4869]: I0130 22:03:50.675189 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-api-2" podUID="4bb8bcda-df1e-447e-a871-f993d3f8f0fe" containerName="cinder-api" containerID="cri-o://365f9174d1a4678299bc9019df0ab009188295ff37e08205b08124b4e613def1" gracePeriod=30 Jan 30 22:03:50 crc kubenswrapper[4869]: I0130 22:03:50.683364 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="cinder-kuttl-tests/cinder-api-2" podUID="4bb8bcda-df1e-447e-a871-f993d3f8f0fe" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.100:8776/healthcheck\": EOF" Jan 30 22:03:50 crc kubenswrapper[4869]: I0130 22:03:50.685141 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-api-1"] Jan 30 22:03:50 crc kubenswrapper[4869]: I0130 22:03:50.685426 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-api-1" podUID="d4588829-9e62-46b7-8de8-949d869d21b5" containerName="cinder-api-log" containerID="cri-o://b9f8c253d0bda51a604e111734b584c3e5383408d0329df2d0f6431c4965ff71" gracePeriod=30 Jan 30 22:03:50 crc kubenswrapper[4869]: I0130 22:03:50.685596 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-api-1" podUID="d4588829-9e62-46b7-8de8-949d869d21b5" containerName="cinder-api" containerID="cri-o://1703043fc7ec030293f446e1c9a44298728d10149e4eca5f10abe6b6ed5dfb95" gracePeriod=30 Jan 30 22:03:50 crc kubenswrapper[4869]: 
I0130 22:03:50.697271 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="cinder-kuttl-tests/cinder-api-1" podUID="d4588829-9e62-46b7-8de8-949d869d21b5" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.101:8776/healthcheck\": EOF" Jan 30 22:03:51 crc kubenswrapper[4869]: I0130 22:03:51.512032 4869 generic.go:334] "Generic (PLEG): container finished" podID="4bb8bcda-df1e-447e-a871-f993d3f8f0fe" containerID="af5740313aa4ab5857e0e9e90c258158a4df2e127bcd0f83b16eb7ae9efb7cd9" exitCode=143 Jan 30 22:03:51 crc kubenswrapper[4869]: I0130 22:03:51.512396 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-2" event={"ID":"4bb8bcda-df1e-447e-a871-f993d3f8f0fe","Type":"ContainerDied","Data":"af5740313aa4ab5857e0e9e90c258158a4df2e127bcd0f83b16eb7ae9efb7cd9"} Jan 30 22:03:51 crc kubenswrapper[4869]: I0130 22:03:51.514828 4869 generic.go:334] "Generic (PLEG): container finished" podID="d4588829-9e62-46b7-8de8-949d869d21b5" containerID="b9f8c253d0bda51a604e111734b584c3e5383408d0329df2d0f6431c4965ff71" exitCode=143 Jan 30 22:03:51 crc kubenswrapper[4869]: I0130 22:03:51.514872 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-1" event={"ID":"d4588829-9e62-46b7-8de8-949d869d21b5","Type":"ContainerDied","Data":"b9f8c253d0bda51a604e111734b584c3e5383408d0329df2d0f6431c4965ff71"} Jan 30 22:03:53 crc kubenswrapper[4869]: I0130 22:03:53.877726 4869 scope.go:117] "RemoveContainer" containerID="bae8ee2d2ea3ae8f98b57e2c4700d1cbe1214043f47147cae81508bdb3e4db03" Jan 30 22:03:53 crc kubenswrapper[4869]: I0130 22:03:53.878042 4869 scope.go:117] "RemoveContainer" containerID="f77f890fcdaf60c1c7106e84a9a3bffb31f68618c74dd384835bde05492c7422" Jan 30 22:03:54 crc kubenswrapper[4869]: I0130 22:03:54.541241 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"a748ea99-7369-48cb-8983-9f41ff077f82","Type":"ContainerStarted","Data":"df319e63bcc19246119a7f2928b40c21d1f71f40baed31a978bfe2418e5b04f5"} Jan 30 22:03:54 crc kubenswrapper[4869]: I0130 22:03:54.541824 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"a748ea99-7369-48cb-8983-9f41ff077f82","Type":"ContainerStarted","Data":"40947c8f4f0eb66d7fe010d4b2272c01a3d3549cf4abab2b2e7adc0f3e05ca51"} Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.088979 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="cinder-kuttl-tests/cinder-api-1" podUID="d4588829-9e62-46b7-8de8-949d869d21b5" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.101:8776/healthcheck\": read tcp 10.217.0.2:57686->10.217.0.101:8776: read: connection reset by peer" Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.112136 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="cinder-kuttl-tests/cinder-api-2" podUID="4bb8bcda-df1e-447e-a871-f993d3f8f0fe" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.100:8776/healthcheck\": read tcp 10.217.0.2:47148->10.217.0.100:8776: read: connection reset by peer" Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.522852 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-1" Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.527428 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-api-2" Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.582231 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4bb8bcda-df1e-447e-a871-f993d3f8f0fe-config-data-custom\") pod \"4bb8bcda-df1e-447e-a871-f993d3f8f0fe\" (UID: \"4bb8bcda-df1e-447e-a871-f993d3f8f0fe\") " Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.582286 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d4588829-9e62-46b7-8de8-949d869d21b5-etc-machine-id\") pod \"d4588829-9e62-46b7-8de8-949d869d21b5\" (UID: \"d4588829-9e62-46b7-8de8-949d869d21b5\") " Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.582333 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4588829-9e62-46b7-8de8-949d869d21b5-logs\") pod \"d4588829-9e62-46b7-8de8-949d869d21b5\" (UID: \"d4588829-9e62-46b7-8de8-949d869d21b5\") " Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.582377 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bb8bcda-df1e-447e-a871-f993d3f8f0fe-scripts\") pod \"4bb8bcda-df1e-447e-a871-f993d3f8f0fe\" (UID: \"4bb8bcda-df1e-447e-a871-f993d3f8f0fe\") " Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.582405 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4bb8bcda-df1e-447e-a871-f993d3f8f0fe-etc-machine-id\") pod \"4bb8bcda-df1e-447e-a871-f993d3f8f0fe\" (UID: \"4bb8bcda-df1e-447e-a871-f993d3f8f0fe\") " Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.582457 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8h49k\" (UniqueName: \"kubernetes.io/projected/4bb8bcda-df1e-447e-a871-f993d3f8f0fe-kube-api-access-8h49k\") pod \"4bb8bcda-df1e-447e-a871-f993d3f8f0fe\" (UID: \"4bb8bcda-df1e-447e-a871-f993d3f8f0fe\") " Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.582490 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bb8bcda-df1e-447e-a871-f993d3f8f0fe-config-data\") pod \"4bb8bcda-df1e-447e-a871-f993d3f8f0fe\" (UID: \"4bb8bcda-df1e-447e-a871-f993d3f8f0fe\") " Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.582515 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4588829-9e62-46b7-8de8-949d869d21b5-config-data-custom\") pod \"d4588829-9e62-46b7-8de8-949d869d21b5\" (UID: \"d4588829-9e62-46b7-8de8-949d869d21b5\") " Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.582543 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4588829-9e62-46b7-8de8-949d869d21b5-config-data\") pod \"d4588829-9e62-46b7-8de8-949d869d21b5\" (UID: \"d4588829-9e62-46b7-8de8-949d869d21b5\") " Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.582569 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbqcj\" (UniqueName: \"kubernetes.io/projected/d4588829-9e62-46b7-8de8-949d869d21b5-kube-api-access-fbqcj\") pod \"d4588829-9e62-46b7-8de8-949d869d21b5\" (UID: 
\"d4588829-9e62-46b7-8de8-949d869d21b5\") " Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.582620 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bb8bcda-df1e-447e-a871-f993d3f8f0fe-logs\") pod \"4bb8bcda-df1e-447e-a871-f993d3f8f0fe\" (UID: \"4bb8bcda-df1e-447e-a871-f993d3f8f0fe\") " Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.582652 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4588829-9e62-46b7-8de8-949d869d21b5-scripts\") pod \"d4588829-9e62-46b7-8de8-949d869d21b5\" (UID: \"d4588829-9e62-46b7-8de8-949d869d21b5\") " Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.583710 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4588829-9e62-46b7-8de8-949d869d21b5-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d4588829-9e62-46b7-8de8-949d869d21b5" (UID: "d4588829-9e62-46b7-8de8-949d869d21b5"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.584138 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4588829-9e62-46b7-8de8-949d869d21b5-logs" (OuterVolumeSpecName: "logs") pod "d4588829-9e62-46b7-8de8-949d869d21b5" (UID: "d4588829-9e62-46b7-8de8-949d869d21b5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.585396 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4bb8bcda-df1e-447e-a871-f993d3f8f0fe-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "4bb8bcda-df1e-447e-a871-f993d3f8f0fe" (UID: "4bb8bcda-df1e-447e-a871-f993d3f8f0fe"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.586156 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bb8bcda-df1e-447e-a871-f993d3f8f0fe-logs" (OuterVolumeSpecName: "logs") pod "4bb8bcda-df1e-447e-a871-f993d3f8f0fe" (UID: "4bb8bcda-df1e-447e-a871-f993d3f8f0fe"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.592491 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4588829-9e62-46b7-8de8-949d869d21b5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d4588829-9e62-46b7-8de8-949d869d21b5" (UID: "d4588829-9e62-46b7-8de8-949d869d21b5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.592532 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4588829-9e62-46b7-8de8-949d869d21b5-kube-api-access-fbqcj" (OuterVolumeSpecName: "kube-api-access-fbqcj") pod "d4588829-9e62-46b7-8de8-949d869d21b5" (UID: "d4588829-9e62-46b7-8de8-949d869d21b5"). InnerVolumeSpecName "kube-api-access-fbqcj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.592789 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb8bcda-df1e-447e-a871-f993d3f8f0fe-kube-api-access-8h49k" (OuterVolumeSpecName: "kube-api-access-8h49k") pod "4bb8bcda-df1e-447e-a871-f993d3f8f0fe" (UID: "4bb8bcda-df1e-447e-a871-f993d3f8f0fe"). InnerVolumeSpecName "kube-api-access-8h49k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.592879 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bb8bcda-df1e-447e-a871-f993d3f8f0fe-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4bb8bcda-df1e-447e-a871-f993d3f8f0fe" (UID: "4bb8bcda-df1e-447e-a871-f993d3f8f0fe"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.593318 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4588829-9e62-46b7-8de8-949d869d21b5-scripts" (OuterVolumeSpecName: "scripts") pod "d4588829-9e62-46b7-8de8-949d869d21b5" (UID: "d4588829-9e62-46b7-8de8-949d869d21b5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.593765 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bb8bcda-df1e-447e-a871-f993d3f8f0fe-scripts" (OuterVolumeSpecName: "scripts") pod "4bb8bcda-df1e-447e-a871-f993d3f8f0fe" (UID: "4bb8bcda-df1e-447e-a871-f993d3f8f0fe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.597303 4869 generic.go:334] "Generic (PLEG): container finished" podID="4bb8bcda-df1e-447e-a871-f993d3f8f0fe" containerID="365f9174d1a4678299bc9019df0ab009188295ff37e08205b08124b4e613def1" exitCode=0 Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.597590 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-2" event={"ID":"4bb8bcda-df1e-447e-a871-f993d3f8f0fe","Type":"ContainerDied","Data":"365f9174d1a4678299bc9019df0ab009188295ff37e08205b08124b4e613def1"} Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.597626 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-2" event={"ID":"4bb8bcda-df1e-447e-a871-f993d3f8f0fe","Type":"ContainerDied","Data":"1df3586abb6c7c0c26c956fbea9ee0acd9f9687b73ff0be2b6c3591eda0dd1f7"} Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.597648 4869 scope.go:117] "RemoveContainer" containerID="365f9174d1a4678299bc9019df0ab009188295ff37e08205b08124b4e613def1" Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.597839 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-2" Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.628678 4869 generic.go:334] "Generic (PLEG): container finished" podID="d4588829-9e62-46b7-8de8-949d869d21b5" containerID="1703043fc7ec030293f446e1c9a44298728d10149e4eca5f10abe6b6ed5dfb95" exitCode=0 Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.629256 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-api-1" Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.632809 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-1" event={"ID":"d4588829-9e62-46b7-8de8-949d869d21b5","Type":"ContainerDied","Data":"1703043fc7ec030293f446e1c9a44298728d10149e4eca5f10abe6b6ed5dfb95"} Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.632989 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-1" event={"ID":"d4588829-9e62-46b7-8de8-949d869d21b5","Type":"ContainerDied","Data":"08c99aedd594ac9e38f0e73de3abe53540a4ef1a5077f18300c044bb181d5fe1"} Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.637647 4869 generic.go:334] "Generic (PLEG): container finished" podID="a748ea99-7369-48cb-8983-9f41ff077f82" containerID="df319e63bcc19246119a7f2928b40c21d1f71f40baed31a978bfe2418e5b04f5" exitCode=1 Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.637699 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"a748ea99-7369-48cb-8983-9f41ff077f82","Type":"ContainerDied","Data":"df319e63bcc19246119a7f2928b40c21d1f71f40baed31a978bfe2418e5b04f5"} Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.638441 4869 scope.go:117] "RemoveContainer" containerID="df319e63bcc19246119a7f2928b40c21d1f71f40baed31a978bfe2418e5b04f5" Jan 30 22:03:56 crc kubenswrapper[4869]: E0130 22:03:56.639115 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(a748ea99-7369-48cb-8983-9f41ff077f82)\"" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="a748ea99-7369-48cb-8983-9f41ff077f82" Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.641315 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bb8bcda-df1e-447e-a871-f993d3f8f0fe-config-data" (OuterVolumeSpecName: "config-data") pod "4bb8bcda-df1e-447e-a871-f993d3f8f0fe" (UID: "4bb8bcda-df1e-447e-a871-f993d3f8f0fe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.658277 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4588829-9e62-46b7-8de8-949d869d21b5-config-data" (OuterVolumeSpecName: "config-data") pod "d4588829-9e62-46b7-8de8-949d869d21b5" (UID: "d4588829-9e62-46b7-8de8-949d869d21b5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.673235 4869 scope.go:117] "RemoveContainer" containerID="af5740313aa4ab5857e0e9e90c258158a4df2e127bcd0f83b16eb7ae9efb7cd9" Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.684811 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4588829-9e62-46b7-8de8-949d869d21b5-logs\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.684840 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bb8bcda-df1e-447e-a871-f993d3f8f0fe-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.684850 4869 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4bb8bcda-df1e-447e-a871-f993d3f8f0fe-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.684859 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8h49k\" (UniqueName: \"kubernetes.io/projected/4bb8bcda-df1e-447e-a871-f993d3f8f0fe-kube-api-access-8h49k\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.684869 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bb8bcda-df1e-447e-a871-f993d3f8f0fe-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.684878 4869 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4588829-9e62-46b7-8de8-949d869d21b5-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.684887 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4588829-9e62-46b7-8de8-949d869d21b5-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.684910 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbqcj\" (UniqueName: \"kubernetes.io/projected/d4588829-9e62-46b7-8de8-949d869d21b5-kube-api-access-fbqcj\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.684921 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bb8bcda-df1e-447e-a871-f993d3f8f0fe-logs\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.684930 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4588829-9e62-46b7-8de8-949d869d21b5-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.684938 4869 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4bb8bcda-df1e-447e-a871-f993d3f8f0fe-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.684947 4869 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d4588829-9e62-46b7-8de8-949d869d21b5-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.692583 4869 scope.go:117] "RemoveContainer" 
containerID="365f9174d1a4678299bc9019df0ab009188295ff37e08205b08124b4e613def1" Jan 30 22:03:56 crc kubenswrapper[4869]: E0130 22:03:56.693087 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"365f9174d1a4678299bc9019df0ab009188295ff37e08205b08124b4e613def1\": container with ID starting with 365f9174d1a4678299bc9019df0ab009188295ff37e08205b08124b4e613def1 not found: ID does not exist" containerID="365f9174d1a4678299bc9019df0ab009188295ff37e08205b08124b4e613def1" Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.693137 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"365f9174d1a4678299bc9019df0ab009188295ff37e08205b08124b4e613def1"} err="failed to get container status \"365f9174d1a4678299bc9019df0ab009188295ff37e08205b08124b4e613def1\": rpc error: code = NotFound desc = could not find container \"365f9174d1a4678299bc9019df0ab009188295ff37e08205b08124b4e613def1\": container with ID starting with 365f9174d1a4678299bc9019df0ab009188295ff37e08205b08124b4e613def1 not found: ID does not exist" Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.693174 4869 scope.go:117] "RemoveContainer" containerID="af5740313aa4ab5857e0e9e90c258158a4df2e127bcd0f83b16eb7ae9efb7cd9" Jan 30 22:03:56 crc kubenswrapper[4869]: E0130 22:03:56.693461 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af5740313aa4ab5857e0e9e90c258158a4df2e127bcd0f83b16eb7ae9efb7cd9\": container with ID starting with af5740313aa4ab5857e0e9e90c258158a4df2e127bcd0f83b16eb7ae9efb7cd9 not found: ID does not exist" containerID="af5740313aa4ab5857e0e9e90c258158a4df2e127bcd0f83b16eb7ae9efb7cd9" Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.693484 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af5740313aa4ab5857e0e9e90c258158a4df2e127bcd0f83b16eb7ae9efb7cd9"} err="failed to get container status \"af5740313aa4ab5857e0e9e90c258158a4df2e127bcd0f83b16eb7ae9efb7cd9\": rpc error: code = NotFound desc = could not find container \"af5740313aa4ab5857e0e9e90c258158a4df2e127bcd0f83b16eb7ae9efb7cd9\": container with ID starting with af5740313aa4ab5857e0e9e90c258158a4df2e127bcd0f83b16eb7ae9efb7cd9 not found: ID does not exist" Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.693498 4869 scope.go:117] "RemoveContainer" containerID="1703043fc7ec030293f446e1c9a44298728d10149e4eca5f10abe6b6ed5dfb95" Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.713973 4869 scope.go:117] "RemoveContainer" containerID="b9f8c253d0bda51a604e111734b584c3e5383408d0329df2d0f6431c4965ff71" Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.727283 4869 scope.go:117] "RemoveContainer" containerID="1703043fc7ec030293f446e1c9a44298728d10149e4eca5f10abe6b6ed5dfb95" Jan 30 22:03:56 crc kubenswrapper[4869]: E0130 22:03:56.727689 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1703043fc7ec030293f446e1c9a44298728d10149e4eca5f10abe6b6ed5dfb95\": container with ID starting with 1703043fc7ec030293f446e1c9a44298728d10149e4eca5f10abe6b6ed5dfb95 not found: ID does not exist" containerID="1703043fc7ec030293f446e1c9a44298728d10149e4eca5f10abe6b6ed5dfb95" Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.727731 4869 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1703043fc7ec030293f446e1c9a44298728d10149e4eca5f10abe6b6ed5dfb95"} err="failed to get container status \"1703043fc7ec030293f446e1c9a44298728d10149e4eca5f10abe6b6ed5dfb95\": rpc error: code = NotFound desc = could not find container \"1703043fc7ec030293f446e1c9a44298728d10149e4eca5f10abe6b6ed5dfb95\": container with ID starting with 1703043fc7ec030293f446e1c9a44298728d10149e4eca5f10abe6b6ed5dfb95 not found: ID does not exist" Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.727761 4869 scope.go:117] "RemoveContainer" containerID="b9f8c253d0bda51a604e111734b584c3e5383408d0329df2d0f6431c4965ff71" Jan 30 22:03:56 crc kubenswrapper[4869]: E0130 22:03:56.729817 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9f8c253d0bda51a604e111734b584c3e5383408d0329df2d0f6431c4965ff71\": container with ID starting with b9f8c253d0bda51a604e111734b584c3e5383408d0329df2d0f6431c4965ff71 not found: ID does not exist" containerID="b9f8c253d0bda51a604e111734b584c3e5383408d0329df2d0f6431c4965ff71" Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.729849 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9f8c253d0bda51a604e111734b584c3e5383408d0329df2d0f6431c4965ff71"} err="failed to get container status \"b9f8c253d0bda51a604e111734b584c3e5383408d0329df2d0f6431c4965ff71\": rpc error: code = NotFound desc = could not find container \"b9f8c253d0bda51a604e111734b584c3e5383408d0329df2d0f6431c4965ff71\": container with ID starting with b9f8c253d0bda51a604e111734b584c3e5383408d0329df2d0f6431c4965ff71 not found: ID does not exist" Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.729870 4869 scope.go:117] "RemoveContainer" containerID="f77f890fcdaf60c1c7106e84a9a3bffb31f68618c74dd384835bde05492c7422" Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.925300 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-api-2"] Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.931623 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-api-2"] Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.965373 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-api-1"] Jan 30 22:03:56 crc kubenswrapper[4869]: I0130 22:03:56.976469 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-api-1"] Jan 30 22:03:57 crc kubenswrapper[4869]: I0130 22:03:57.193502 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:03:57 crc kubenswrapper[4869]: I0130 22:03:57.647613 4869 generic.go:334] "Generic (PLEG): container finished" podID="a748ea99-7369-48cb-8983-9f41ff077f82" containerID="40947c8f4f0eb66d7fe010d4b2272c01a3d3549cf4abab2b2e7adc0f3e05ca51" exitCode=1 Jan 30 22:03:57 crc kubenswrapper[4869]: I0130 22:03:57.647656 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"a748ea99-7369-48cb-8983-9f41ff077f82","Type":"ContainerDied","Data":"40947c8f4f0eb66d7fe010d4b2272c01a3d3549cf4abab2b2e7adc0f3e05ca51"} Jan 30 22:03:57 crc kubenswrapper[4869]: I0130 22:03:57.647716 4869 scope.go:117] "RemoveContainer" containerID="bae8ee2d2ea3ae8f98b57e2c4700d1cbe1214043f47147cae81508bdb3e4db03" Jan 30 22:03:57 crc kubenswrapper[4869]: I0130 22:03:57.648161 4869 scope.go:117] "RemoveContainer" 
containerID="40947c8f4f0eb66d7fe010d4b2272c01a3d3549cf4abab2b2e7adc0f3e05ca51" Jan 30 22:03:57 crc kubenswrapper[4869]: I0130 22:03:57.648221 4869 scope.go:117] "RemoveContainer" containerID="df319e63bcc19246119a7f2928b40c21d1f71f40baed31a978bfe2418e5b04f5" Jan 30 22:03:57 crc kubenswrapper[4869]: E0130 22:03:57.648591 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(a748ea99-7369-48cb-8983-9f41ff077f82)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(a748ea99-7369-48cb-8983-9f41ff077f82)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="a748ea99-7369-48cb-8983-9f41ff077f82" Jan 30 22:03:57 crc kubenswrapper[4869]: I0130 22:03:57.887624 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb8bcda-df1e-447e-a871-f993d3f8f0fe" path="/var/lib/kubelet/pods/4bb8bcda-df1e-447e-a871-f993d3f8f0fe/volumes" Jan 30 22:03:57 crc kubenswrapper[4869]: I0130 22:03:57.888863 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4588829-9e62-46b7-8de8-949d869d21b5" path="/var/lib/kubelet/pods/d4588829-9e62-46b7-8de8-949d869d21b5/volumes" Jan 30 22:03:57 crc kubenswrapper[4869]: I0130 22:03:57.957518 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-db-sync-xpr5d"] Jan 30 22:03:57 crc kubenswrapper[4869]: I0130 22:03:57.963816 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-db-sync-xpr5d"] Jan 30 22:03:57 crc kubenswrapper[4869]: I0130 22:03:57.986370 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-0"] Jan 30 22:03:57 crc kubenswrapper[4869]: I0130 22:03:57.986613 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-scheduler-0" podUID="e1b895b0-ebcd-4f90-ae7b-633961d007a4" containerName="cinder-scheduler" containerID="cri-o://405249c3040942893a3cc2649766ede319a616f3e6290634faa7245dc8b085f1" gracePeriod=30 Jan 30 22:03:57 crc kubenswrapper[4869]: I0130 22:03:57.986748 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-scheduler-0" podUID="e1b895b0-ebcd-4f90-ae7b-633961d007a4" containerName="probe" containerID="cri-o://c324f8439d3e895528b2f2380b00c266d6c750d360d0227d35c563e523f6f44c" gracePeriod=30 Jan 30 22:03:58 crc kubenswrapper[4869]: I0130 22:03:58.004048 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-backup-0"] Jan 30 22:03:58 crc kubenswrapper[4869]: I0130 22:03:58.004369 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-backup-0" podUID="43e985d3-f817-41ac-919d-9625124f4fcd" containerName="cinder-backup" containerID="cri-o://9767c8399608620e9e185888f758761f7fbb19790189630f3b39670088d1f93a" gracePeriod=30 Jan 30 22:03:58 crc kubenswrapper[4869]: I0130 22:03:58.004417 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-backup-0" podUID="43e985d3-f817-41ac-919d-9625124f4fcd" containerName="probe" containerID="cri-o://22c27a6999604729fe94cc707b767ac1a410e22f58d8992b96b97c642b095f1a" gracePeriod=30 Jan 30 22:03:58 crc kubenswrapper[4869]: I0130 22:03:58.030738 4869 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-volume-volume1-0"] Jan 30 22:03:58 crc kubenswrapper[4869]: I0130 22:03:58.052024 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder3e67-account-delete-kpstw"] Jan 30 22:03:58 crc kubenswrapper[4869]: E0130 22:03:58.052356 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4588829-9e62-46b7-8de8-949d869d21b5" containerName="cinder-api-log" Jan 30 22:03:58 crc kubenswrapper[4869]: I0130 22:03:58.052377 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4588829-9e62-46b7-8de8-949d869d21b5" containerName="cinder-api-log" Jan 30 22:03:58 crc kubenswrapper[4869]: E0130 22:03:58.052398 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bb8bcda-df1e-447e-a871-f993d3f8f0fe" containerName="cinder-api-log" Jan 30 22:03:58 crc kubenswrapper[4869]: I0130 22:03:58.052406 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bb8bcda-df1e-447e-a871-f993d3f8f0fe" containerName="cinder-api-log" Jan 30 22:03:58 crc kubenswrapper[4869]: E0130 22:03:58.052424 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bb8bcda-df1e-447e-a871-f993d3f8f0fe" containerName="cinder-api" Jan 30 22:03:58 crc kubenswrapper[4869]: I0130 22:03:58.052434 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bb8bcda-df1e-447e-a871-f993d3f8f0fe" containerName="cinder-api" Jan 30 22:03:58 crc kubenswrapper[4869]: E0130 22:03:58.052444 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4588829-9e62-46b7-8de8-949d869d21b5" containerName="cinder-api" Jan 30 22:03:58 crc kubenswrapper[4869]: I0130 22:03:58.052453 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4588829-9e62-46b7-8de8-949d869d21b5" containerName="cinder-api" Jan 30 22:03:58 crc kubenswrapper[4869]: I0130 22:03:58.052600 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4588829-9e62-46b7-8de8-949d869d21b5" containerName="cinder-api-log" Jan 30 22:03:58 crc kubenswrapper[4869]: I0130 22:03:58.052617 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bb8bcda-df1e-447e-a871-f993d3f8f0fe" containerName="cinder-api-log" Jan 30 22:03:58 crc kubenswrapper[4869]: I0130 22:03:58.052639 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bb8bcda-df1e-447e-a871-f993d3f8f0fe" containerName="cinder-api" Jan 30 22:03:58 crc kubenswrapper[4869]: I0130 22:03:58.052649 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4588829-9e62-46b7-8de8-949d869d21b5" containerName="cinder-api" Jan 30 22:03:58 crc kubenswrapper[4869]: I0130 22:03:58.053238 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder3e67-account-delete-kpstw" Jan 30 22:03:58 crc kubenswrapper[4869]: I0130 22:03:58.059803 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 30 22:03:58 crc kubenswrapper[4869]: I0130 22:03:58.060074 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-api-0" podUID="a05330df-ce32-4e24-9c83-80d3d6851fe2" containerName="cinder-api-log" containerID="cri-o://2bb15315f2b3aab17d241c4b95c77aad9853fb4456d4eb4d2bf7998cb1d88f78" gracePeriod=30 Jan 30 22:03:58 crc kubenswrapper[4869]: I0130 22:03:58.060109 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-api-0" podUID="a05330df-ce32-4e24-9c83-80d3d6851fe2" containerName="cinder-api" containerID="cri-o://0a4660d9b7201024ad830649e6f62adaf974d3b510cee5cac6e343b871616828" gracePeriod=30 Jan 30 22:03:58 crc kubenswrapper[4869]: I0130 22:03:58.072424 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder3e67-account-delete-kpstw"] Jan 30 22:03:58 crc kubenswrapper[4869]: I0130 22:03:58.104381 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71ab0b2b-d725-460f-84a1-3ba61d7e1861-operator-scripts\") pod \"cinder3e67-account-delete-kpstw\" (UID: \"71ab0b2b-d725-460f-84a1-3ba61d7e1861\") " pod="cinder-kuttl-tests/cinder3e67-account-delete-kpstw" Jan 30 22:03:58 crc kubenswrapper[4869]: I0130 22:03:58.104456 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qspsx\" (UniqueName: \"kubernetes.io/projected/71ab0b2b-d725-460f-84a1-3ba61d7e1861-kube-api-access-qspsx\") pod \"cinder3e67-account-delete-kpstw\" (UID: \"71ab0b2b-d725-460f-84a1-3ba61d7e1861\") " pod="cinder-kuttl-tests/cinder3e67-account-delete-kpstw" Jan 30 22:03:58 crc kubenswrapper[4869]: I0130 22:03:58.206307 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71ab0b2b-d725-460f-84a1-3ba61d7e1861-operator-scripts\") pod \"cinder3e67-account-delete-kpstw\" (UID: \"71ab0b2b-d725-460f-84a1-3ba61d7e1861\") " pod="cinder-kuttl-tests/cinder3e67-account-delete-kpstw" Jan 30 22:03:58 crc kubenswrapper[4869]: I0130 22:03:58.206411 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qspsx\" (UniqueName: \"kubernetes.io/projected/71ab0b2b-d725-460f-84a1-3ba61d7e1861-kube-api-access-qspsx\") pod \"cinder3e67-account-delete-kpstw\" (UID: \"71ab0b2b-d725-460f-84a1-3ba61d7e1861\") " pod="cinder-kuttl-tests/cinder3e67-account-delete-kpstw" Jan 30 22:03:58 crc kubenswrapper[4869]: I0130 22:03:58.207459 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71ab0b2b-d725-460f-84a1-3ba61d7e1861-operator-scripts\") pod \"cinder3e67-account-delete-kpstw\" (UID: \"71ab0b2b-d725-460f-84a1-3ba61d7e1861\") " pod="cinder-kuttl-tests/cinder3e67-account-delete-kpstw" Jan 30 22:03:58 crc kubenswrapper[4869]: I0130 22:03:58.234852 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qspsx\" (UniqueName: \"kubernetes.io/projected/71ab0b2b-d725-460f-84a1-3ba61d7e1861-kube-api-access-qspsx\") pod \"cinder3e67-account-delete-kpstw\" (UID: 
\"71ab0b2b-d725-460f-84a1-3ba61d7e1861\") " pod="cinder-kuttl-tests/cinder3e67-account-delete-kpstw" Jan 30 22:03:58 crc kubenswrapper[4869]: I0130 22:03:58.381295 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder3e67-account-delete-kpstw" Jan 30 22:03:58 crc kubenswrapper[4869]: I0130 22:03:58.630910 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder3e67-account-delete-kpstw"] Jan 30 22:03:58 crc kubenswrapper[4869]: I0130 22:03:58.682945 4869 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="cinder-kuttl-tests/cinder-volume-volume1-0" secret="" err="secret \"cinder-cinder-dockercfg-6wxf9\" not found" Jan 30 22:03:58 crc kubenswrapper[4869]: I0130 22:03:58.682995 4869 scope.go:117] "RemoveContainer" containerID="40947c8f4f0eb66d7fe010d4b2272c01a3d3549cf4abab2b2e7adc0f3e05ca51" Jan 30 22:03:58 crc kubenswrapper[4869]: I0130 22:03:58.683006 4869 scope.go:117] "RemoveContainer" containerID="df319e63bcc19246119a7f2928b40c21d1f71f40baed31a978bfe2418e5b04f5" Jan 30 22:03:58 crc kubenswrapper[4869]: E0130 22:03:58.683559 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(a748ea99-7369-48cb-8983-9f41ff077f82)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(a748ea99-7369-48cb-8983-9f41ff077f82)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="a748ea99-7369-48cb-8983-9f41ff077f82" Jan 30 22:03:58 crc kubenswrapper[4869]: I0130 22:03:58.685368 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder3e67-account-delete-kpstw" event={"ID":"71ab0b2b-d725-460f-84a1-3ba61d7e1861","Type":"ContainerStarted","Data":"d4d2b49b39e9bde3ded223d20fa74e4416670720111fa4f621673ff3c5afdbfe"} Jan 30 22:03:58 crc kubenswrapper[4869]: I0130 22:03:58.687445 4869 generic.go:334] "Generic (PLEG): container finished" podID="a05330df-ce32-4e24-9c83-80d3d6851fe2" containerID="2bb15315f2b3aab17d241c4b95c77aad9853fb4456d4eb4d2bf7998cb1d88f78" exitCode=143 Jan 30 22:03:58 crc kubenswrapper[4869]: I0130 22:03:58.687483 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"a05330df-ce32-4e24-9c83-80d3d6851fe2","Type":"ContainerDied","Data":"2bb15315f2b3aab17d241c4b95c77aad9853fb4456d4eb4d2bf7998cb1d88f78"} Jan 30 22:03:58 crc kubenswrapper[4869]: E0130 22:03:58.716620 4869 secret.go:188] Couldn't get secret cinder-kuttl-tests/cinder-volume-volume1-config-data: secret "cinder-volume-volume1-config-data" not found Jan 30 22:03:58 crc kubenswrapper[4869]: E0130 22:03:58.716708 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a748ea99-7369-48cb-8983-9f41ff077f82-config-data-custom podName:a748ea99-7369-48cb-8983-9f41ff077f82 nodeName:}" failed. No retries permitted until 2026-01-30 22:03:59.216686119 +0000 UTC m=+1240.102444324 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data-custom" (UniqueName: "kubernetes.io/secret/a748ea99-7369-48cb-8983-9f41ff077f82-config-data-custom") pod "cinder-volume-volume1-0" (UID: "a748ea99-7369-48cb-8983-9f41ff077f82") : secret "cinder-volume-volume1-config-data" not found Jan 30 22:03:58 crc kubenswrapper[4869]: E0130 22:03:58.716790 4869 secret.go:188] Couldn't get secret cinder-kuttl-tests/cinder-scripts: secret "cinder-scripts" not found Jan 30 22:03:58 crc kubenswrapper[4869]: E0130 22:03:58.716820 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a748ea99-7369-48cb-8983-9f41ff077f82-scripts podName:a748ea99-7369-48cb-8983-9f41ff077f82 nodeName:}" failed. No retries permitted until 2026-01-30 22:03:59.216811453 +0000 UTC m=+1240.102569678 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/a748ea99-7369-48cb-8983-9f41ff077f82-scripts") pod "cinder-volume-volume1-0" (UID: "a748ea99-7369-48cb-8983-9f41ff077f82") : secret "cinder-scripts" not found Jan 30 22:03:58 crc kubenswrapper[4869]: E0130 22:03:58.716872 4869 secret.go:188] Couldn't get secret cinder-kuttl-tests/cinder-config-data: secret "cinder-config-data" not found Jan 30 22:03:58 crc kubenswrapper[4869]: E0130 22:03:58.716918 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a748ea99-7369-48cb-8983-9f41ff077f82-config-data podName:a748ea99-7369-48cb-8983-9f41ff077f82 nodeName:}" failed. No retries permitted until 2026-01-30 22:03:59.216889686 +0000 UTC m=+1240.102647711 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/a748ea99-7369-48cb-8983-9f41ff077f82-config-data") pod "cinder-volume-volume1-0" (UID: "a748ea99-7369-48cb-8983-9f41ff077f82") : secret "cinder-config-data" not found Jan 30 22:03:59 crc kubenswrapper[4869]: E0130 22:03:59.223330 4869 secret.go:188] Couldn't get secret cinder-kuttl-tests/cinder-scripts: secret "cinder-scripts" not found Jan 30 22:03:59 crc kubenswrapper[4869]: E0130 22:03:59.223711 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a748ea99-7369-48cb-8983-9f41ff077f82-scripts podName:a748ea99-7369-48cb-8983-9f41ff077f82 nodeName:}" failed. No retries permitted until 2026-01-30 22:04:00.223695499 +0000 UTC m=+1241.109453524 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/a748ea99-7369-48cb-8983-9f41ff077f82-scripts") pod "cinder-volume-volume1-0" (UID: "a748ea99-7369-48cb-8983-9f41ff077f82") : secret "cinder-scripts" not found Jan 30 22:03:59 crc kubenswrapper[4869]: E0130 22:03:59.223417 4869 secret.go:188] Couldn't get secret cinder-kuttl-tests/cinder-config-data: secret "cinder-config-data" not found Jan 30 22:03:59 crc kubenswrapper[4869]: E0130 22:03:59.223873 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a748ea99-7369-48cb-8983-9f41ff077f82-config-data podName:a748ea99-7369-48cb-8983-9f41ff077f82 nodeName:}" failed. No retries permitted until 2026-01-30 22:04:00.223864375 +0000 UTC m=+1241.109622400 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/a748ea99-7369-48cb-8983-9f41ff077f82-config-data") pod "cinder-volume-volume1-0" (UID: "a748ea99-7369-48cb-8983-9f41ff077f82") : secret "cinder-config-data" not found Jan 30 22:03:59 crc kubenswrapper[4869]: E0130 22:03:59.223474 4869 secret.go:188] Couldn't get secret cinder-kuttl-tests/cinder-volume-volume1-config-data: secret "cinder-volume-volume1-config-data" not found Jan 30 22:03:59 crc kubenswrapper[4869]: E0130 22:03:59.223981 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a748ea99-7369-48cb-8983-9f41ff077f82-config-data-custom podName:a748ea99-7369-48cb-8983-9f41ff077f82 nodeName:}" failed. No retries permitted until 2026-01-30 22:04:00.223974898 +0000 UTC m=+1241.109732923 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data-custom" (UniqueName: "kubernetes.io/secret/a748ea99-7369-48cb-8983-9f41ff077f82-config-data-custom") pod "cinder-volume-volume1-0" (UID: "a748ea99-7369-48cb-8983-9f41ff077f82") : secret "cinder-volume-volume1-config-data" not found Jan 30 22:03:59 crc kubenswrapper[4869]: I0130 22:03:59.696296 4869 generic.go:334] "Generic (PLEG): container finished" podID="43e985d3-f817-41ac-919d-9625124f4fcd" containerID="22c27a6999604729fe94cc707b767ac1a410e22f58d8992b96b97c642b095f1a" exitCode=0 Jan 30 22:03:59 crc kubenswrapper[4869]: I0130 22:03:59.696370 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-0" event={"ID":"43e985d3-f817-41ac-919d-9625124f4fcd","Type":"ContainerDied","Data":"22c27a6999604729fe94cc707b767ac1a410e22f58d8992b96b97c642b095f1a"} Jan 30 22:03:59 crc kubenswrapper[4869]: I0130 22:03:59.698296 4869 generic.go:334] "Generic (PLEG): container finished" podID="71ab0b2b-d725-460f-84a1-3ba61d7e1861" containerID="7650c35a76197836c7579269a8aca153e89b63434ae7f559706e7f94bb5949e7" exitCode=0 Jan 30 22:03:59 crc kubenswrapper[4869]: I0130 22:03:59.698380 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder3e67-account-delete-kpstw" event={"ID":"71ab0b2b-d725-460f-84a1-3ba61d7e1861","Type":"ContainerDied","Data":"7650c35a76197836c7579269a8aca153e89b63434ae7f559706e7f94bb5949e7"} Jan 30 22:03:59 crc kubenswrapper[4869]: I0130 22:03:59.700872 4869 generic.go:334] "Generic (PLEG): container finished" podID="e1b895b0-ebcd-4f90-ae7b-633961d007a4" containerID="c324f8439d3e895528b2f2380b00c266d6c750d360d0227d35c563e523f6f44c" exitCode=0 Jan 30 22:03:59 crc kubenswrapper[4869]: I0130 22:03:59.700909 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-0" event={"ID":"e1b895b0-ebcd-4f90-ae7b-633961d007a4","Type":"ContainerDied","Data":"c324f8439d3e895528b2f2380b00c266d6c750d360d0227d35c563e523f6f44c"} Jan 30 22:03:59 crc kubenswrapper[4869]: I0130 22:03:59.889460 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73d220bd-14d6-4146-a2a4-bad4060711bb" path="/var/lib/kubelet/pods/73d220bd-14d6-4146-a2a4-bad4060711bb/volumes" Jan 30 22:04:00 crc kubenswrapper[4869]: I0130 22:04:00.025063 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:00 crc kubenswrapper[4869]: I0130 22:04:00.135139 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-476n9\" (UniqueName: \"kubernetes.io/projected/a748ea99-7369-48cb-8983-9f41ff077f82-kube-api-access-476n9\") pod \"a748ea99-7369-48cb-8983-9f41ff077f82\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " Jan 30 22:04:00 crc kubenswrapper[4869]: I0130 22:04:00.135192 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-sys\") pod \"a748ea99-7369-48cb-8983-9f41ff077f82\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " Jan 30 22:04:00 crc kubenswrapper[4869]: I0130 22:04:00.135233 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-lib-modules\") pod \"a748ea99-7369-48cb-8983-9f41ff077f82\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " Jan 30 22:04:00 crc kubenswrapper[4869]: I0130 22:04:00.135268 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-run\") pod \"a748ea99-7369-48cb-8983-9f41ff077f82\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " Jan 30 22:04:00 crc kubenswrapper[4869]: I0130 22:04:00.135287 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-etc-machine-id\") pod \"a748ea99-7369-48cb-8983-9f41ff077f82\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " Jan 30 22:04:00 crc kubenswrapper[4869]: I0130 22:04:00.135333 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-var-locks-brick\") pod \"a748ea99-7369-48cb-8983-9f41ff077f82\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " Jan 30 22:04:00 crc kubenswrapper[4869]: I0130 22:04:00.135355 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-var-locks-cinder\") pod \"a748ea99-7369-48cb-8983-9f41ff077f82\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " Jan 30 22:04:00 crc kubenswrapper[4869]: I0130 22:04:00.135399 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a748ea99-7369-48cb-8983-9f41ff077f82-config-data-custom\") pod \"a748ea99-7369-48cb-8983-9f41ff077f82\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " Jan 30 22:04:00 crc kubenswrapper[4869]: I0130 22:04:00.135441 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-dev\") pod \"a748ea99-7369-48cb-8983-9f41ff077f82\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " Jan 30 22:04:00 crc kubenswrapper[4869]: I0130 22:04:00.135557 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a748ea99-7369-48cb-8983-9f41ff077f82-config-data\") pod \"a748ea99-7369-48cb-8983-9f41ff077f82\" (UID: 
\"a748ea99-7369-48cb-8983-9f41ff077f82\") " Jan 30 22:04:00 crc kubenswrapper[4869]: I0130 22:04:00.135581 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a748ea99-7369-48cb-8983-9f41ff077f82-scripts\") pod \"a748ea99-7369-48cb-8983-9f41ff077f82\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " Jan 30 22:04:00 crc kubenswrapper[4869]: I0130 22:04:00.135611 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-etc-nvme\") pod \"a748ea99-7369-48cb-8983-9f41ff077f82\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " Jan 30 22:04:00 crc kubenswrapper[4869]: I0130 22:04:00.135633 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-etc-iscsi\") pod \"a748ea99-7369-48cb-8983-9f41ff077f82\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " Jan 30 22:04:00 crc kubenswrapper[4869]: I0130 22:04:00.135650 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-var-lib-cinder\") pod \"a748ea99-7369-48cb-8983-9f41ff077f82\" (UID: \"a748ea99-7369-48cb-8983-9f41ff077f82\") " Jan 30 22:04:00 crc kubenswrapper[4869]: I0130 22:04:00.135919 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "a748ea99-7369-48cb-8983-9f41ff077f82" (UID: "a748ea99-7369-48cb-8983-9f41ff077f82"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:04:00 crc kubenswrapper[4869]: I0130 22:04:00.135988 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-sys" (OuterVolumeSpecName: "sys") pod "a748ea99-7369-48cb-8983-9f41ff077f82" (UID: "a748ea99-7369-48cb-8983-9f41ff077f82"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:04:00 crc kubenswrapper[4869]: I0130 22:04:00.136015 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a748ea99-7369-48cb-8983-9f41ff077f82" (UID: "a748ea99-7369-48cb-8983-9f41ff077f82"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:04:00 crc kubenswrapper[4869]: I0130 22:04:00.136041 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-run" (OuterVolumeSpecName: "run") pod "a748ea99-7369-48cb-8983-9f41ff077f82" (UID: "a748ea99-7369-48cb-8983-9f41ff077f82"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:04:00 crc kubenswrapper[4869]: I0130 22:04:00.136063 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a748ea99-7369-48cb-8983-9f41ff077f82" (UID: "a748ea99-7369-48cb-8983-9f41ff077f82"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:04:00 crc kubenswrapper[4869]: I0130 22:04:00.136188 4869 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:00 crc kubenswrapper[4869]: I0130 22:04:00.136216 4869 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-run\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:00 crc kubenswrapper[4869]: I0130 22:04:00.136227 4869 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:00 crc kubenswrapper[4869]: I0130 22:04:00.136241 4869 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-var-locks-cinder\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:00 crc kubenswrapper[4869]: I0130 22:04:00.136253 4869 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-sys\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:00 crc kubenswrapper[4869]: I0130 22:04:00.136289 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "a748ea99-7369-48cb-8983-9f41ff077f82" (UID: "a748ea99-7369-48cb-8983-9f41ff077f82"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:04:00 crc kubenswrapper[4869]: I0130 22:04:00.136424 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "a748ea99-7369-48cb-8983-9f41ff077f82" (UID: "a748ea99-7369-48cb-8983-9f41ff077f82"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:04:00 crc kubenswrapper[4869]: I0130 22:04:00.136523 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "a748ea99-7369-48cb-8983-9f41ff077f82" (UID: "a748ea99-7369-48cb-8983-9f41ff077f82"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:04:00 crc kubenswrapper[4869]: I0130 22:04:00.136552 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "a748ea99-7369-48cb-8983-9f41ff077f82" (UID: "a748ea99-7369-48cb-8983-9f41ff077f82"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:04:00 crc kubenswrapper[4869]: I0130 22:04:00.136576 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-dev" (OuterVolumeSpecName: "dev") pod "a748ea99-7369-48cb-8983-9f41ff077f82" (UID: "a748ea99-7369-48cb-8983-9f41ff077f82"). InnerVolumeSpecName "dev". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:04:00 crc kubenswrapper[4869]: I0130 22:04:00.141844 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a748ea99-7369-48cb-8983-9f41ff077f82-kube-api-access-476n9" (OuterVolumeSpecName: "kube-api-access-476n9") pod "a748ea99-7369-48cb-8983-9f41ff077f82" (UID: "a748ea99-7369-48cb-8983-9f41ff077f82"). InnerVolumeSpecName "kube-api-access-476n9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:04:00 crc kubenswrapper[4869]: I0130 22:04:00.143762 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a748ea99-7369-48cb-8983-9f41ff077f82-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a748ea99-7369-48cb-8983-9f41ff077f82" (UID: "a748ea99-7369-48cb-8983-9f41ff077f82"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:00 crc kubenswrapper[4869]: I0130 22:04:00.144888 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a748ea99-7369-48cb-8983-9f41ff077f82-scripts" (OuterVolumeSpecName: "scripts") pod "a748ea99-7369-48cb-8983-9f41ff077f82" (UID: "a748ea99-7369-48cb-8983-9f41ff077f82"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:00 crc kubenswrapper[4869]: I0130 22:04:00.215782 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a748ea99-7369-48cb-8983-9f41ff077f82-config-data" (OuterVolumeSpecName: "config-data") pod "a748ea99-7369-48cb-8983-9f41ff077f82" (UID: "a748ea99-7369-48cb-8983-9f41ff077f82"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:00 crc kubenswrapper[4869]: I0130 22:04:00.237501 4869 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a748ea99-7369-48cb-8983-9f41ff077f82-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:00 crc kubenswrapper[4869]: I0130 22:04:00.237537 4869 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-dev\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:00 crc kubenswrapper[4869]: I0130 22:04:00.237549 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a748ea99-7369-48cb-8983-9f41ff077f82-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:00 crc kubenswrapper[4869]: I0130 22:04:00.237561 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a748ea99-7369-48cb-8983-9f41ff077f82-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:00 crc kubenswrapper[4869]: I0130 22:04:00.237572 4869 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:00 crc kubenswrapper[4869]: I0130 22:04:00.237582 4869 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:00 crc kubenswrapper[4869]: I0130 22:04:00.237592 4869 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: 
\"kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-var-lib-cinder\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:00 crc kubenswrapper[4869]: I0130 22:04:00.237606 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-476n9\" (UniqueName: \"kubernetes.io/projected/a748ea99-7369-48cb-8983-9f41ff077f82-kube-api-access-476n9\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:00 crc kubenswrapper[4869]: I0130 22:04:00.237616 4869 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a748ea99-7369-48cb-8983-9f41ff077f82-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:00 crc kubenswrapper[4869]: I0130 22:04:00.708727 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"a748ea99-7369-48cb-8983-9f41ff077f82","Type":"ContainerDied","Data":"6425717b7bdfb81b8482d5f4a1e1b4b8c1e4a02b23dbd00f5d2e892e3ed8d127"} Jan 30 22:04:00 crc kubenswrapper[4869]: I0130 22:04:00.708791 4869 scope.go:117] "RemoveContainer" containerID="df319e63bcc19246119a7f2928b40c21d1f71f40baed31a978bfe2418e5b04f5" Jan 30 22:04:00 crc kubenswrapper[4869]: I0130 22:04:00.708756 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:00 crc kubenswrapper[4869]: I0130 22:04:00.744519 4869 scope.go:117] "RemoveContainer" containerID="40947c8f4f0eb66d7fe010d4b2272c01a3d3549cf4abab2b2e7adc0f3e05ca51" Jan 30 22:04:00 crc kubenswrapper[4869]: I0130 22:04:00.779390 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-volume-volume1-0"] Jan 30 22:04:00 crc kubenswrapper[4869]: I0130 22:04:00.783977 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-volume-volume1-0"] Jan 30 22:04:01 crc kubenswrapper[4869]: I0130 22:04:01.005805 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder3e67-account-delete-kpstw" Jan 30 22:04:01 crc kubenswrapper[4869]: I0130 22:04:01.150560 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qspsx\" (UniqueName: \"kubernetes.io/projected/71ab0b2b-d725-460f-84a1-3ba61d7e1861-kube-api-access-qspsx\") pod \"71ab0b2b-d725-460f-84a1-3ba61d7e1861\" (UID: \"71ab0b2b-d725-460f-84a1-3ba61d7e1861\") " Jan 30 22:04:01 crc kubenswrapper[4869]: I0130 22:04:01.150866 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71ab0b2b-d725-460f-84a1-3ba61d7e1861-operator-scripts\") pod \"71ab0b2b-d725-460f-84a1-3ba61d7e1861\" (UID: \"71ab0b2b-d725-460f-84a1-3ba61d7e1861\") " Jan 30 22:04:01 crc kubenswrapper[4869]: I0130 22:04:01.153373 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71ab0b2b-d725-460f-84a1-3ba61d7e1861-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "71ab0b2b-d725-460f-84a1-3ba61d7e1861" (UID: "71ab0b2b-d725-460f-84a1-3ba61d7e1861"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:04:01 crc kubenswrapper[4869]: I0130 22:04:01.156689 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71ab0b2b-d725-460f-84a1-3ba61d7e1861-kube-api-access-qspsx" (OuterVolumeSpecName: "kube-api-access-qspsx") pod "71ab0b2b-d725-460f-84a1-3ba61d7e1861" (UID: "71ab0b2b-d725-460f-84a1-3ba61d7e1861"). InnerVolumeSpecName "kube-api-access-qspsx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:04:01 crc kubenswrapper[4869]: I0130 22:04:01.249059 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="cinder-kuttl-tests/cinder-api-0" podUID="a05330df-ce32-4e24-9c83-80d3d6851fe2" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.99:8776/healthcheck\": read tcp 10.217.0.2:57666->10.217.0.99:8776: read: connection reset by peer" Jan 30 22:04:01 crc kubenswrapper[4869]: I0130 22:04:01.252991 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71ab0b2b-d725-460f-84a1-3ba61d7e1861-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:01 crc kubenswrapper[4869]: I0130 22:04:01.253022 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qspsx\" (UniqueName: \"kubernetes.io/projected/71ab0b2b-d725-460f-84a1-3ba61d7e1861-kube-api-access-qspsx\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:01 crc kubenswrapper[4869]: I0130 22:04:01.575696 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:04:01 crc kubenswrapper[4869]: I0130 22:04:01.716480 4869 generic.go:334] "Generic (PLEG): container finished" podID="a05330df-ce32-4e24-9c83-80d3d6851fe2" containerID="0a4660d9b7201024ad830649e6f62adaf974d3b510cee5cac6e343b871616828" exitCode=0 Jan 30 22:04:01 crc kubenswrapper[4869]: I0130 22:04:01.716541 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"a05330df-ce32-4e24-9c83-80d3d6851fe2","Type":"ContainerDied","Data":"0a4660d9b7201024ad830649e6f62adaf974d3b510cee5cac6e343b871616828"} Jan 30 22:04:01 crc kubenswrapper[4869]: I0130 22:04:01.716568 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"a05330df-ce32-4e24-9c83-80d3d6851fe2","Type":"ContainerDied","Data":"13005a6de2acb656d412ffcc3e9e063f8a295b3305b3b75b027b3a04b7529760"} Jan 30 22:04:01 crc kubenswrapper[4869]: I0130 22:04:01.716611 4869 scope.go:117] "RemoveContainer" containerID="0a4660d9b7201024ad830649e6f62adaf974d3b510cee5cac6e343b871616828" Jan 30 22:04:01 crc kubenswrapper[4869]: I0130 22:04:01.716709 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:04:01 crc kubenswrapper[4869]: I0130 22:04:01.719471 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder3e67-account-delete-kpstw" event={"ID":"71ab0b2b-d725-460f-84a1-3ba61d7e1861","Type":"ContainerDied","Data":"d4d2b49b39e9bde3ded223d20fa74e4416670720111fa4f621673ff3c5afdbfe"} Jan 30 22:04:01 crc kubenswrapper[4869]: I0130 22:04:01.719517 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4d2b49b39e9bde3ded223d20fa74e4416670720111fa4f621673ff3c5afdbfe" Jan 30 22:04:01 crc kubenswrapper[4869]: I0130 22:04:01.719572 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder3e67-account-delete-kpstw" Jan 30 22:04:01 crc kubenswrapper[4869]: I0130 22:04:01.725968 4869 generic.go:334] "Generic (PLEG): container finished" podID="e1b895b0-ebcd-4f90-ae7b-633961d007a4" containerID="405249c3040942893a3cc2649766ede319a616f3e6290634faa7245dc8b085f1" exitCode=0 Jan 30 22:04:01 crc kubenswrapper[4869]: I0130 22:04:01.726012 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-0" event={"ID":"e1b895b0-ebcd-4f90-ae7b-633961d007a4","Type":"ContainerDied","Data":"405249c3040942893a3cc2649766ede319a616f3e6290634faa7245dc8b085f1"} Jan 30 22:04:01 crc kubenswrapper[4869]: I0130 22:04:01.744960 4869 scope.go:117] "RemoveContainer" containerID="2bb15315f2b3aab17d241c4b95c77aad9853fb4456d4eb4d2bf7998cb1d88f78" Jan 30 22:04:01 crc kubenswrapper[4869]: I0130 22:04:01.760476 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a05330df-ce32-4e24-9c83-80d3d6851fe2-config-data\") pod \"a05330df-ce32-4e24-9c83-80d3d6851fe2\" (UID: \"a05330df-ce32-4e24-9c83-80d3d6851fe2\") " Jan 30 22:04:01 crc kubenswrapper[4869]: I0130 22:04:01.760785 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a05330df-ce32-4e24-9c83-80d3d6851fe2-logs\") pod \"a05330df-ce32-4e24-9c83-80d3d6851fe2\" (UID: \"a05330df-ce32-4e24-9c83-80d3d6851fe2\") " Jan 30 22:04:01 crc kubenswrapper[4869]: I0130 22:04:01.761008 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a05330df-ce32-4e24-9c83-80d3d6851fe2-config-data-custom\") pod \"a05330df-ce32-4e24-9c83-80d3d6851fe2\" (UID: \"a05330df-ce32-4e24-9c83-80d3d6851fe2\") " Jan 30 22:04:01 crc kubenswrapper[4869]: I0130 22:04:01.761122 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gg92z\" (UniqueName: \"kubernetes.io/projected/a05330df-ce32-4e24-9c83-80d3d6851fe2-kube-api-access-gg92z\") pod \"a05330df-ce32-4e24-9c83-80d3d6851fe2\" (UID: \"a05330df-ce32-4e24-9c83-80d3d6851fe2\") " Jan 30 22:04:01 crc kubenswrapper[4869]: I0130 22:04:01.761310 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a05330df-ce32-4e24-9c83-80d3d6851fe2-scripts\") pod \"a05330df-ce32-4e24-9c83-80d3d6851fe2\" (UID: \"a05330df-ce32-4e24-9c83-80d3d6851fe2\") " Jan 30 22:04:01 crc kubenswrapper[4869]: I0130 22:04:01.761506 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a05330df-ce32-4e24-9c83-80d3d6851fe2-etc-machine-id\") pod \"a05330df-ce32-4e24-9c83-80d3d6851fe2\" (UID: \"a05330df-ce32-4e24-9c83-80d3d6851fe2\") " Jan 30 22:04:01 crc kubenswrapper[4869]: I0130 22:04:01.761700 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a05330df-ce32-4e24-9c83-80d3d6851fe2-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a05330df-ce32-4e24-9c83-80d3d6851fe2" (UID: "a05330df-ce32-4e24-9c83-80d3d6851fe2"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:04:01 crc kubenswrapper[4869]: I0130 22:04:01.761838 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a05330df-ce32-4e24-9c83-80d3d6851fe2-logs" (OuterVolumeSpecName: "logs") pod "a05330df-ce32-4e24-9c83-80d3d6851fe2" (UID: "a05330df-ce32-4e24-9c83-80d3d6851fe2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:04:01 crc kubenswrapper[4869]: I0130 22:04:01.762208 4869 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a05330df-ce32-4e24-9c83-80d3d6851fe2-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:01 crc kubenswrapper[4869]: I0130 22:04:01.762355 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a05330df-ce32-4e24-9c83-80d3d6851fe2-logs\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:01 crc kubenswrapper[4869]: I0130 22:04:01.764360 4869 scope.go:117] "RemoveContainer" containerID="0a4660d9b7201024ad830649e6f62adaf974d3b510cee5cac6e343b871616828" Jan 30 22:04:01 crc kubenswrapper[4869]: I0130 22:04:01.764402 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a05330df-ce32-4e24-9c83-80d3d6851fe2-scripts" (OuterVolumeSpecName: "scripts") pod "a05330df-ce32-4e24-9c83-80d3d6851fe2" (UID: "a05330df-ce32-4e24-9c83-80d3d6851fe2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:01 crc kubenswrapper[4869]: I0130 22:04:01.764851 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a05330df-ce32-4e24-9c83-80d3d6851fe2-kube-api-access-gg92z" (OuterVolumeSpecName: "kube-api-access-gg92z") pod "a05330df-ce32-4e24-9c83-80d3d6851fe2" (UID: "a05330df-ce32-4e24-9c83-80d3d6851fe2"). InnerVolumeSpecName "kube-api-access-gg92z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:04:01 crc kubenswrapper[4869]: E0130 22:04:01.764879 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a4660d9b7201024ad830649e6f62adaf974d3b510cee5cac6e343b871616828\": container with ID starting with 0a4660d9b7201024ad830649e6f62adaf974d3b510cee5cac6e343b871616828 not found: ID does not exist" containerID="0a4660d9b7201024ad830649e6f62adaf974d3b510cee5cac6e343b871616828" Jan 30 22:04:01 crc kubenswrapper[4869]: I0130 22:04:01.764942 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a4660d9b7201024ad830649e6f62adaf974d3b510cee5cac6e343b871616828"} err="failed to get container status \"0a4660d9b7201024ad830649e6f62adaf974d3b510cee5cac6e343b871616828\": rpc error: code = NotFound desc = could not find container \"0a4660d9b7201024ad830649e6f62adaf974d3b510cee5cac6e343b871616828\": container with ID starting with 0a4660d9b7201024ad830649e6f62adaf974d3b510cee5cac6e343b871616828 not found: ID does not exist" Jan 30 22:04:01 crc kubenswrapper[4869]: I0130 22:04:01.764975 4869 scope.go:117] "RemoveContainer" containerID="2bb15315f2b3aab17d241c4b95c77aad9853fb4456d4eb4d2bf7998cb1d88f78" Jan 30 22:04:01 crc kubenswrapper[4869]: I0130 22:04:01.765151 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a05330df-ce32-4e24-9c83-80d3d6851fe2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a05330df-ce32-4e24-9c83-80d3d6851fe2" (UID: "a05330df-ce32-4e24-9c83-80d3d6851fe2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:01 crc kubenswrapper[4869]: E0130 22:04:01.765465 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bb15315f2b3aab17d241c4b95c77aad9853fb4456d4eb4d2bf7998cb1d88f78\": container with ID starting with 2bb15315f2b3aab17d241c4b95c77aad9853fb4456d4eb4d2bf7998cb1d88f78 not found: ID does not exist" containerID="2bb15315f2b3aab17d241c4b95c77aad9853fb4456d4eb4d2bf7998cb1d88f78" Jan 30 22:04:01 crc kubenswrapper[4869]: I0130 22:04:01.765496 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bb15315f2b3aab17d241c4b95c77aad9853fb4456d4eb4d2bf7998cb1d88f78"} err="failed to get container status \"2bb15315f2b3aab17d241c4b95c77aad9853fb4456d4eb4d2bf7998cb1d88f78\": rpc error: code = NotFound desc = could not find container \"2bb15315f2b3aab17d241c4b95c77aad9853fb4456d4eb4d2bf7998cb1d88f78\": container with ID starting with 2bb15315f2b3aab17d241c4b95c77aad9853fb4456d4eb4d2bf7998cb1d88f78 not found: ID does not exist" Jan 30 22:04:01 crc kubenswrapper[4869]: I0130 22:04:01.789754 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a05330df-ce32-4e24-9c83-80d3d6851fe2-config-data" (OuterVolumeSpecName: "config-data") pod "a05330df-ce32-4e24-9c83-80d3d6851fe2" (UID: "a05330df-ce32-4e24-9c83-80d3d6851fe2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:01 crc kubenswrapper[4869]: I0130 22:04:01.863957 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a05330df-ce32-4e24-9c83-80d3d6851fe2-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:01 crc kubenswrapper[4869]: I0130 22:04:01.863995 4869 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a05330df-ce32-4e24-9c83-80d3d6851fe2-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:01 crc kubenswrapper[4869]: I0130 22:04:01.864005 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gg92z\" (UniqueName: \"kubernetes.io/projected/a05330df-ce32-4e24-9c83-80d3d6851fe2-kube-api-access-gg92z\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:01 crc kubenswrapper[4869]: I0130 22:04:01.864016 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a05330df-ce32-4e24-9c83-80d3d6851fe2-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:01 crc kubenswrapper[4869]: I0130 22:04:01.890600 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a748ea99-7369-48cb-8983-9f41ff077f82" path="/var/lib/kubelet/pods/a748ea99-7369-48cb-8983-9f41ff077f82/volumes" Jan 30 22:04:01 crc kubenswrapper[4869]: I0130 22:04:01.990404 4869 patch_prober.go:28] interesting pod/machine-config-daemon-vzgdv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:04:01 crc kubenswrapper[4869]: I0130 22:04:01.990490 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:04:02 crc kubenswrapper[4869]: I0130 22:04:02.049806 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 30 22:04:02 crc kubenswrapper[4869]: I0130 22:04:02.061170 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 30 22:04:02 crc kubenswrapper[4869]: I0130 22:04:02.170066 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 30 22:04:02 crc kubenswrapper[4869]: I0130 22:04:02.269861 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1b895b0-ebcd-4f90-ae7b-633961d007a4-config-data\") pod \"e1b895b0-ebcd-4f90-ae7b-633961d007a4\" (UID: \"e1b895b0-ebcd-4f90-ae7b-633961d007a4\") " Jan 30 22:04:02 crc kubenswrapper[4869]: I0130 22:04:02.269973 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e1b895b0-ebcd-4f90-ae7b-633961d007a4-config-data-custom\") pod \"e1b895b0-ebcd-4f90-ae7b-633961d007a4\" (UID: \"e1b895b0-ebcd-4f90-ae7b-633961d007a4\") " Jan 30 22:04:02 crc kubenswrapper[4869]: I0130 22:04:02.270073 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzcv4\" (UniqueName: \"kubernetes.io/projected/e1b895b0-ebcd-4f90-ae7b-633961d007a4-kube-api-access-xzcv4\") pod \"e1b895b0-ebcd-4f90-ae7b-633961d007a4\" (UID: \"e1b895b0-ebcd-4f90-ae7b-633961d007a4\") " Jan 30 22:04:02 crc kubenswrapper[4869]: I0130 22:04:02.270113 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e1b895b0-ebcd-4f90-ae7b-633961d007a4-etc-machine-id\") pod \"e1b895b0-ebcd-4f90-ae7b-633961d007a4\" (UID: \"e1b895b0-ebcd-4f90-ae7b-633961d007a4\") " Jan 30 22:04:02 crc kubenswrapper[4869]: I0130 22:04:02.270206 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1b895b0-ebcd-4f90-ae7b-633961d007a4-scripts\") pod \"e1b895b0-ebcd-4f90-ae7b-633961d007a4\" (UID: \"e1b895b0-ebcd-4f90-ae7b-633961d007a4\") " Jan 30 22:04:02 crc kubenswrapper[4869]: I0130 22:04:02.270346 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1b895b0-ebcd-4f90-ae7b-633961d007a4-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e1b895b0-ebcd-4f90-ae7b-633961d007a4" (UID: "e1b895b0-ebcd-4f90-ae7b-633961d007a4"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:04:02 crc kubenswrapper[4869]: I0130 22:04:02.270755 4869 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e1b895b0-ebcd-4f90-ae7b-633961d007a4-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:02 crc kubenswrapper[4869]: I0130 22:04:02.273234 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1b895b0-ebcd-4f90-ae7b-633961d007a4-scripts" (OuterVolumeSpecName: "scripts") pod "e1b895b0-ebcd-4f90-ae7b-633961d007a4" (UID: "e1b895b0-ebcd-4f90-ae7b-633961d007a4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:02 crc kubenswrapper[4869]: I0130 22:04:02.273436 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1b895b0-ebcd-4f90-ae7b-633961d007a4-kube-api-access-xzcv4" (OuterVolumeSpecName: "kube-api-access-xzcv4") pod "e1b895b0-ebcd-4f90-ae7b-633961d007a4" (UID: "e1b895b0-ebcd-4f90-ae7b-633961d007a4"). InnerVolumeSpecName "kube-api-access-xzcv4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:04:02 crc kubenswrapper[4869]: I0130 22:04:02.274024 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1b895b0-ebcd-4f90-ae7b-633961d007a4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e1b895b0-ebcd-4f90-ae7b-633961d007a4" (UID: "e1b895b0-ebcd-4f90-ae7b-633961d007a4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:02 crc kubenswrapper[4869]: I0130 22:04:02.328547 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1b895b0-ebcd-4f90-ae7b-633961d007a4-config-data" (OuterVolumeSpecName: "config-data") pod "e1b895b0-ebcd-4f90-ae7b-633961d007a4" (UID: "e1b895b0-ebcd-4f90-ae7b-633961d007a4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:02 crc kubenswrapper[4869]: I0130 22:04:02.373298 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1b895b0-ebcd-4f90-ae7b-633961d007a4-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:02 crc kubenswrapper[4869]: I0130 22:04:02.373356 4869 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e1b895b0-ebcd-4f90-ae7b-633961d007a4-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:02 crc kubenswrapper[4869]: I0130 22:04:02.373379 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzcv4\" (UniqueName: \"kubernetes.io/projected/e1b895b0-ebcd-4f90-ae7b-633961d007a4-kube-api-access-xzcv4\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:02 crc kubenswrapper[4869]: I0130 22:04:02.373395 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1b895b0-ebcd-4f90-ae7b-633961d007a4-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:02 crc kubenswrapper[4869]: I0130 22:04:02.735747 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-0" event={"ID":"e1b895b0-ebcd-4f90-ae7b-633961d007a4","Type":"ContainerDied","Data":"25a0b9c206622d850461d1056d024e8af526d8bc266a8178484d3522fd3e6bb2"} Jan 30 22:04:02 crc kubenswrapper[4869]: I0130 22:04:02.735793 4869 scope.go:117] "RemoveContainer" containerID="c324f8439d3e895528b2f2380b00c266d6c750d360d0227d35c563e523f6f44c" Jan 30 22:04:02 crc kubenswrapper[4869]: I0130 22:04:02.735938 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 30 22:04:02 crc kubenswrapper[4869]: I0130 22:04:02.759610 4869 scope.go:117] "RemoveContainer" containerID="405249c3040942893a3cc2649766ede319a616f3e6290634faa7245dc8b085f1" Jan 30 22:04:02 crc kubenswrapper[4869]: I0130 22:04:02.771399 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-0"] Jan 30 22:04:02 crc kubenswrapper[4869]: I0130 22:04:02.775998 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-0"] Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.107395 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-db-create-5zk6x"] Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.112714 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-db-create-5zk6x"] Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.119122 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-3e67-account-create-update-zcf8m"] Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.127670 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder3e67-account-delete-kpstw"] Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.132466 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-3e67-account-create-update-zcf8m"] Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.137226 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder3e67-account-delete-kpstw"] Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.184125 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-db-create-lpvxm"] Jan 30 22:04:03 crc kubenswrapper[4869]: E0130 22:04:03.185372 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a748ea99-7369-48cb-8983-9f41ff077f82" containerName="cinder-volume" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.185392 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a748ea99-7369-48cb-8983-9f41ff077f82" containerName="cinder-volume" Jan 30 22:04:03 crc kubenswrapper[4869]: E0130 22:04:03.185403 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a748ea99-7369-48cb-8983-9f41ff077f82" containerName="probe" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.185410 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a748ea99-7369-48cb-8983-9f41ff077f82" containerName="probe" Jan 30 22:04:03 crc kubenswrapper[4869]: E0130 22:04:03.185417 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a748ea99-7369-48cb-8983-9f41ff077f82" containerName="probe" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.185423 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a748ea99-7369-48cb-8983-9f41ff077f82" containerName="probe" Jan 30 22:04:03 crc kubenswrapper[4869]: E0130 22:04:03.185431 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a748ea99-7369-48cb-8983-9f41ff077f82" containerName="cinder-volume" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.185439 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a748ea99-7369-48cb-8983-9f41ff077f82" containerName="cinder-volume" Jan 30 22:04:03 crc kubenswrapper[4869]: E0130 22:04:03.185449 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a05330df-ce32-4e24-9c83-80d3d6851fe2" containerName="cinder-api-log" Jan 30 22:04:03 
crc kubenswrapper[4869]: I0130 22:04:03.185455 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a05330df-ce32-4e24-9c83-80d3d6851fe2" containerName="cinder-api-log" Jan 30 22:04:03 crc kubenswrapper[4869]: E0130 22:04:03.185469 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1b895b0-ebcd-4f90-ae7b-633961d007a4" containerName="cinder-scheduler" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.185475 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1b895b0-ebcd-4f90-ae7b-633961d007a4" containerName="cinder-scheduler" Jan 30 22:04:03 crc kubenswrapper[4869]: E0130 22:04:03.185485 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a748ea99-7369-48cb-8983-9f41ff077f82" containerName="probe" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.185491 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a748ea99-7369-48cb-8983-9f41ff077f82" containerName="probe" Jan 30 22:04:03 crc kubenswrapper[4869]: E0130 22:04:03.185497 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1b895b0-ebcd-4f90-ae7b-633961d007a4" containerName="probe" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.185504 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1b895b0-ebcd-4f90-ae7b-633961d007a4" containerName="probe" Jan 30 22:04:03 crc kubenswrapper[4869]: E0130 22:04:03.185518 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71ab0b2b-d725-460f-84a1-3ba61d7e1861" containerName="mariadb-account-delete" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.185524 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="71ab0b2b-d725-460f-84a1-3ba61d7e1861" containerName="mariadb-account-delete" Jan 30 22:04:03 crc kubenswrapper[4869]: E0130 22:04:03.185535 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a748ea99-7369-48cb-8983-9f41ff077f82" containerName="cinder-volume" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.185541 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a748ea99-7369-48cb-8983-9f41ff077f82" containerName="cinder-volume" Jan 30 22:04:03 crc kubenswrapper[4869]: E0130 22:04:03.185549 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a748ea99-7369-48cb-8983-9f41ff077f82" containerName="probe" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.185555 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a748ea99-7369-48cb-8983-9f41ff077f82" containerName="probe" Jan 30 22:04:03 crc kubenswrapper[4869]: E0130 22:04:03.185564 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a05330df-ce32-4e24-9c83-80d3d6851fe2" containerName="cinder-api" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.185571 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a05330df-ce32-4e24-9c83-80d3d6851fe2" containerName="cinder-api" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.185685 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a748ea99-7369-48cb-8983-9f41ff077f82" containerName="cinder-volume" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.185702 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a748ea99-7369-48cb-8983-9f41ff077f82" containerName="cinder-volume" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.185709 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a05330df-ce32-4e24-9c83-80d3d6851fe2" containerName="cinder-api" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 
22:04:03.185718 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a748ea99-7369-48cb-8983-9f41ff077f82" containerName="probe" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.185726 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1b895b0-ebcd-4f90-ae7b-633961d007a4" containerName="probe" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.185737 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a748ea99-7369-48cb-8983-9f41ff077f82" containerName="probe" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.185746 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a748ea99-7369-48cb-8983-9f41ff077f82" containerName="probe" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.185755 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1b895b0-ebcd-4f90-ae7b-633961d007a4" containerName="cinder-scheduler" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.185764 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a05330df-ce32-4e24-9c83-80d3d6851fe2" containerName="cinder-api-log" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.185772 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="71ab0b2b-d725-460f-84a1-3ba61d7e1861" containerName="mariadb-account-delete" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.185781 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a748ea99-7369-48cb-8983-9f41ff077f82" containerName="probe" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.185787 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a748ea99-7369-48cb-8983-9f41ff077f82" containerName="cinder-volume" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.185797 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a748ea99-7369-48cb-8983-9f41ff077f82" containerName="cinder-volume" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.186306 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-db-create-lpvxm" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.190211 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-db-create-lpvxm"] Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.287731 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd81195d-554e-4f5f-8aae-4513a7c31542-operator-scripts\") pod \"cinder-db-create-lpvxm\" (UID: \"bd81195d-554e-4f5f-8aae-4513a7c31542\") " pod="cinder-kuttl-tests/cinder-db-create-lpvxm" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.287804 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwxhp\" (UniqueName: \"kubernetes.io/projected/bd81195d-554e-4f5f-8aae-4513a7c31542-kube-api-access-rwxhp\") pod \"cinder-db-create-lpvxm\" (UID: \"bd81195d-554e-4f5f-8aae-4513a7c31542\") " pod="cinder-kuttl-tests/cinder-db-create-lpvxm" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.288779 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-b570-account-create-update-9sfxn"] Jan 30 22:04:03 crc kubenswrapper[4869]: E0130 22:04:03.289076 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a748ea99-7369-48cb-8983-9f41ff077f82" containerName="probe" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.289097 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a748ea99-7369-48cb-8983-9f41ff077f82" containerName="probe" Jan 30 22:04:03 crc kubenswrapper[4869]: E0130 22:04:03.289118 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a748ea99-7369-48cb-8983-9f41ff077f82" containerName="cinder-volume" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.289126 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a748ea99-7369-48cb-8983-9f41ff077f82" containerName="cinder-volume" Jan 30 22:04:03 crc kubenswrapper[4869]: E0130 22:04:03.289142 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a748ea99-7369-48cb-8983-9f41ff077f82" containerName="cinder-volume" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.289150 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a748ea99-7369-48cb-8983-9f41ff077f82" containerName="cinder-volume" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.289322 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a748ea99-7369-48cb-8983-9f41ff077f82" containerName="cinder-volume" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.289351 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a748ea99-7369-48cb-8983-9f41ff077f82" containerName="probe" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.289859 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-b570-account-create-update-9sfxn" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.292099 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-db-secret" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.306885 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-b570-account-create-update-9sfxn"] Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.388658 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3baac441-39df-4456-a3dd-2bf91773c4aa-operator-scripts\") pod \"cinder-b570-account-create-update-9sfxn\" (UID: \"3baac441-39df-4456-a3dd-2bf91773c4aa\") " pod="cinder-kuttl-tests/cinder-b570-account-create-update-9sfxn" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.389043 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfkrc\" (UniqueName: \"kubernetes.io/projected/3baac441-39df-4456-a3dd-2bf91773c4aa-kube-api-access-qfkrc\") pod \"cinder-b570-account-create-update-9sfxn\" (UID: \"3baac441-39df-4456-a3dd-2bf91773c4aa\") " pod="cinder-kuttl-tests/cinder-b570-account-create-update-9sfxn" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.389233 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd81195d-554e-4f5f-8aae-4513a7c31542-operator-scripts\") pod \"cinder-db-create-lpvxm\" (UID: \"bd81195d-554e-4f5f-8aae-4513a7c31542\") " pod="cinder-kuttl-tests/cinder-db-create-lpvxm" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.389336 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwxhp\" (UniqueName: \"kubernetes.io/projected/bd81195d-554e-4f5f-8aae-4513a7c31542-kube-api-access-rwxhp\") pod \"cinder-db-create-lpvxm\" (UID: \"bd81195d-554e-4f5f-8aae-4513a7c31542\") " pod="cinder-kuttl-tests/cinder-db-create-lpvxm" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.390020 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd81195d-554e-4f5f-8aae-4513a7c31542-operator-scripts\") pod \"cinder-db-create-lpvxm\" (UID: \"bd81195d-554e-4f5f-8aae-4513a7c31542\") " pod="cinder-kuttl-tests/cinder-db-create-lpvxm" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.427573 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwxhp\" (UniqueName: \"kubernetes.io/projected/bd81195d-554e-4f5f-8aae-4513a7c31542-kube-api-access-rwxhp\") pod \"cinder-db-create-lpvxm\" (UID: \"bd81195d-554e-4f5f-8aae-4513a7c31542\") " pod="cinder-kuttl-tests/cinder-db-create-lpvxm" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.491006 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3baac441-39df-4456-a3dd-2bf91773c4aa-operator-scripts\") pod \"cinder-b570-account-create-update-9sfxn\" (UID: \"3baac441-39df-4456-a3dd-2bf91773c4aa\") " pod="cinder-kuttl-tests/cinder-b570-account-create-update-9sfxn" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.491067 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfkrc\" (UniqueName: 
\"kubernetes.io/projected/3baac441-39df-4456-a3dd-2bf91773c4aa-kube-api-access-qfkrc\") pod \"cinder-b570-account-create-update-9sfxn\" (UID: \"3baac441-39df-4456-a3dd-2bf91773c4aa\") " pod="cinder-kuttl-tests/cinder-b570-account-create-update-9sfxn" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.492009 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3baac441-39df-4456-a3dd-2bf91773c4aa-operator-scripts\") pod \"cinder-b570-account-create-update-9sfxn\" (UID: \"3baac441-39df-4456-a3dd-2bf91773c4aa\") " pod="cinder-kuttl-tests/cinder-b570-account-create-update-9sfxn" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.511667 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfkrc\" (UniqueName: \"kubernetes.io/projected/3baac441-39df-4456-a3dd-2bf91773c4aa-kube-api-access-qfkrc\") pod \"cinder-b570-account-create-update-9sfxn\" (UID: \"3baac441-39df-4456-a3dd-2bf91773c4aa\") " pod="cinder-kuttl-tests/cinder-b570-account-create-update-9sfxn" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.538657 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-db-create-lpvxm" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.605165 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-b570-account-create-update-9sfxn" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.747221 4869 generic.go:334] "Generic (PLEG): container finished" podID="43e985d3-f817-41ac-919d-9625124f4fcd" containerID="9767c8399608620e9e185888f758761f7fbb19790189630f3b39670088d1f93a" exitCode=0 Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.747319 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-0" event={"ID":"43e985d3-f817-41ac-919d-9625124f4fcd","Type":"ContainerDied","Data":"9767c8399608620e9e185888f758761f7fbb19790189630f3b39670088d1f93a"} Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.843515 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-b570-account-create-update-9sfxn"] Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.893271 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71ab0b2b-d725-460f-84a1-3ba61d7e1861" path="/var/lib/kubelet/pods/71ab0b2b-d725-460f-84a1-3ba61d7e1861/volumes" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.894401 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a05330df-ce32-4e24-9c83-80d3d6851fe2" path="/var/lib/kubelet/pods/a05330df-ce32-4e24-9c83-80d3d6851fe2/volumes" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.895240 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1d0d236-2728-44cf-b723-8de09f743481" path="/var/lib/kubelet/pods/c1d0d236-2728-44cf-b723-8de09f743481/volumes" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.897355 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1b895b0-ebcd-4f90-ae7b-633961d007a4" path="/var/lib/kubelet/pods/e1b895b0-ebcd-4f90-ae7b-633961d007a4/volumes" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.898037 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5266478-3a00-466d-8e24-6a9e0d3edb29" path="/var/lib/kubelet/pods/e5266478-3a00-466d-8e24-6a9e0d3edb29/volumes" Jan 30 22:04:03 crc kubenswrapper[4869]: I0130 22:04:03.982071 4869 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-db-create-lpvxm"] Jan 30 22:04:03 crc kubenswrapper[4869]: W0130 22:04:03.984677 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd81195d_554e_4f5f_8aae_4513a7c31542.slice/crio-b3e6a4c066da99223e102f24b09215524303b01d197ec106c0c8ba67fbe13e27 WatchSource:0}: Error finding container b3e6a4c066da99223e102f24b09215524303b01d197ec106c0c8ba67fbe13e27: Status 404 returned error can't find the container with id b3e6a4c066da99223e102f24b09215524303b01d197ec106c0c8ba67fbe13e27 Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.183719 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.305216 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-lib-modules\") pod \"43e985d3-f817-41ac-919d-9625124f4fcd\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.305272 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-dev\") pod \"43e985d3-f817-41ac-919d-9625124f4fcd\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.305290 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-etc-machine-id\") pod \"43e985d3-f817-41ac-919d-9625124f4fcd\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.305334 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43e985d3-f817-41ac-919d-9625124f4fcd-scripts\") pod \"43e985d3-f817-41ac-919d-9625124f4fcd\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.305367 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-etc-nvme\") pod \"43e985d3-f817-41ac-919d-9625124f4fcd\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.305387 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "43e985d3-f817-41ac-919d-9625124f4fcd" (UID: "43e985d3-f817-41ac-919d-9625124f4fcd"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.305399 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-sys\") pod \"43e985d3-f817-41ac-919d-9625124f4fcd\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.305387 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-dev" (OuterVolumeSpecName: "dev") pod "43e985d3-f817-41ac-919d-9625124f4fcd" (UID: "43e985d3-f817-41ac-919d-9625124f4fcd"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.305416 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "43e985d3-f817-41ac-919d-9625124f4fcd" (UID: "43e985d3-f817-41ac-919d-9625124f4fcd"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.305410 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "43e985d3-f817-41ac-919d-9625124f4fcd" (UID: "43e985d3-f817-41ac-919d-9625124f4fcd"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.305440 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-var-lib-cinder\") pod \"43e985d3-f817-41ac-919d-9625124f4fcd\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.305449 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-sys" (OuterVolumeSpecName: "sys") pod "43e985d3-f817-41ac-919d-9625124f4fcd" (UID: "43e985d3-f817-41ac-919d-9625124f4fcd"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.305466 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsrgv\" (UniqueName: \"kubernetes.io/projected/43e985d3-f817-41ac-919d-9625124f4fcd-kube-api-access-dsrgv\") pod \"43e985d3-f817-41ac-919d-9625124f4fcd\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.305495 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43e985d3-f817-41ac-919d-9625124f4fcd-config-data\") pod \"43e985d3-f817-41ac-919d-9625124f4fcd\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.305506 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "43e985d3-f817-41ac-919d-9625124f4fcd" (UID: "43e985d3-f817-41ac-919d-9625124f4fcd"). InnerVolumeSpecName "var-lib-cinder". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.305537 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-var-locks-brick\") pod \"43e985d3-f817-41ac-919d-9625124f4fcd\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.305554 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/43e985d3-f817-41ac-919d-9625124f4fcd-config-data-custom\") pod \"43e985d3-f817-41ac-919d-9625124f4fcd\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.305569 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-etc-iscsi\") pod \"43e985d3-f817-41ac-919d-9625124f4fcd\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.305593 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-run\") pod \"43e985d3-f817-41ac-919d-9625124f4fcd\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.305615 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-var-locks-cinder\") pod \"43e985d3-f817-41ac-919d-9625124f4fcd\" (UID: \"43e985d3-f817-41ac-919d-9625124f4fcd\") " Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.305650 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "43e985d3-f817-41ac-919d-9625124f4fcd" (UID: "43e985d3-f817-41ac-919d-9625124f4fcd"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.305677 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "43e985d3-f817-41ac-919d-9625124f4fcd" (UID: "43e985d3-f817-41ac-919d-9625124f4fcd"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.305701 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-run" (OuterVolumeSpecName: "run") pod "43e985d3-f817-41ac-919d-9625124f4fcd" (UID: "43e985d3-f817-41ac-919d-9625124f4fcd"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.305775 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "43e985d3-f817-41ac-919d-9625124f4fcd" (UID: "43e985d3-f817-41ac-919d-9625124f4fcd"). InnerVolumeSpecName "var-locks-cinder". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.305935 4869 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-run\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.305949 4869 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-var-locks-cinder\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.305959 4869 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.305968 4869 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-dev\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.305976 4869 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.305985 4869 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.305992 4869 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-sys\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.306001 4869 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-var-lib-cinder\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.306011 4869 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.306019 4869 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/43e985d3-f817-41ac-919d-9625124f4fcd-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.310928 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43e985d3-f817-41ac-919d-9625124f4fcd-scripts" (OuterVolumeSpecName: "scripts") pod "43e985d3-f817-41ac-919d-9625124f4fcd" (UID: "43e985d3-f817-41ac-919d-9625124f4fcd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.311325 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43e985d3-f817-41ac-919d-9625124f4fcd-kube-api-access-dsrgv" (OuterVolumeSpecName: "kube-api-access-dsrgv") pod "43e985d3-f817-41ac-919d-9625124f4fcd" (UID: "43e985d3-f817-41ac-919d-9625124f4fcd"). InnerVolumeSpecName "kube-api-access-dsrgv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.316976 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43e985d3-f817-41ac-919d-9625124f4fcd-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "43e985d3-f817-41ac-919d-9625124f4fcd" (UID: "43e985d3-f817-41ac-919d-9625124f4fcd"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.361701 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43e985d3-f817-41ac-919d-9625124f4fcd-config-data" (OuterVolumeSpecName: "config-data") pod "43e985d3-f817-41ac-919d-9625124f4fcd" (UID: "43e985d3-f817-41ac-919d-9625124f4fcd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.407535 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43e985d3-f817-41ac-919d-9625124f4fcd-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.408114 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsrgv\" (UniqueName: \"kubernetes.io/projected/43e985d3-f817-41ac-919d-9625124f4fcd-kube-api-access-dsrgv\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.408315 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43e985d3-f817-41ac-919d-9625124f4fcd-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.408401 4869 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/43e985d3-f817-41ac-919d-9625124f4fcd-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.757378 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-b570-account-create-update-9sfxn" event={"ID":"3baac441-39df-4456-a3dd-2bf91773c4aa","Type":"ContainerStarted","Data":"3e6caf84787a54d79e5bf12ae642db88009e24419803d6a191b5375af5acbe69"} Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.757784 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-b570-account-create-update-9sfxn" event={"ID":"3baac441-39df-4456-a3dd-2bf91773c4aa","Type":"ContainerStarted","Data":"2c68cdff3e55e0f389390b44e80b4986d5db4f6058095bbdd10e8837d9b20303"} Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.759939 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-create-lpvxm" event={"ID":"bd81195d-554e-4f5f-8aae-4513a7c31542","Type":"ContainerStarted","Data":"1e1984c3f29e264a9568411e7643d013e896efadf4da3caebab064a8302bf46b"} Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.760060 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-create-lpvxm" event={"ID":"bd81195d-554e-4f5f-8aae-4513a7c31542","Type":"ContainerStarted","Data":"b3e6a4c066da99223e102f24b09215524303b01d197ec106c0c8ba67fbe13e27"} Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.762122 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-0" 
event={"ID":"43e985d3-f817-41ac-919d-9625124f4fcd","Type":"ContainerDied","Data":"ee640bfd32cda0727ac5e4fbdf83c88e869db885377249347fed4136c448b38e"} Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.762199 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.762326 4869 scope.go:117] "RemoveContainer" containerID="22c27a6999604729fe94cc707b767ac1a410e22f58d8992b96b97c642b095f1a" Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.788164 4869 scope.go:117] "RemoveContainer" containerID="9767c8399608620e9e185888f758761f7fbb19790189630f3b39670088d1f93a" Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.808537 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-db-create-lpvxm" podStartSLOduration=1.808509701 podStartE2EDuration="1.808509701s" podCreationTimestamp="2026-01-30 22:04:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:04:04.802395739 +0000 UTC m=+1245.688153774" watchObservedRunningTime="2026-01-30 22:04:04.808509701 +0000 UTC m=+1245.694267746" Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.810220 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-b570-account-create-update-9sfxn" podStartSLOduration=1.810210004 podStartE2EDuration="1.810210004s" podCreationTimestamp="2026-01-30 22:04:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:04:04.780294326 +0000 UTC m=+1245.666052361" watchObservedRunningTime="2026-01-30 22:04:04.810210004 +0000 UTC m=+1245.695968039" Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.823616 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-backup-0"] Jan 30 22:04:04 crc kubenswrapper[4869]: I0130 22:04:04.829398 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-backup-0"] Jan 30 22:04:05 crc kubenswrapper[4869]: I0130 22:04:05.884335 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43e985d3-f817-41ac-919d-9625124f4fcd" path="/var/lib/kubelet/pods/43e985d3-f817-41ac-919d-9625124f4fcd/volumes" Jan 30 22:04:06 crc kubenswrapper[4869]: I0130 22:04:06.777025 4869 generic.go:334] "Generic (PLEG): container finished" podID="3baac441-39df-4456-a3dd-2bf91773c4aa" containerID="3e6caf84787a54d79e5bf12ae642db88009e24419803d6a191b5375af5acbe69" exitCode=0 Jan 30 22:04:06 crc kubenswrapper[4869]: I0130 22:04:06.777116 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-b570-account-create-update-9sfxn" event={"ID":"3baac441-39df-4456-a3dd-2bf91773c4aa","Type":"ContainerDied","Data":"3e6caf84787a54d79e5bf12ae642db88009e24419803d6a191b5375af5acbe69"} Jan 30 22:04:06 crc kubenswrapper[4869]: I0130 22:04:06.778764 4869 generic.go:334] "Generic (PLEG): container finished" podID="bd81195d-554e-4f5f-8aae-4513a7c31542" containerID="1e1984c3f29e264a9568411e7643d013e896efadf4da3caebab064a8302bf46b" exitCode=0 Jan 30 22:04:06 crc kubenswrapper[4869]: I0130 22:04:06.778799 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-create-lpvxm" 
event={"ID":"bd81195d-554e-4f5f-8aae-4513a7c31542","Type":"ContainerDied","Data":"1e1984c3f29e264a9568411e7643d013e896efadf4da3caebab064a8302bf46b"} Jan 30 22:04:08 crc kubenswrapper[4869]: I0130 22:04:08.071273 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-b570-account-create-update-9sfxn" Jan 30 22:04:08 crc kubenswrapper[4869]: I0130 22:04:08.076564 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-db-create-lpvxm" Jan 30 22:04:08 crc kubenswrapper[4869]: I0130 22:04:08.168333 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfkrc\" (UniqueName: \"kubernetes.io/projected/3baac441-39df-4456-a3dd-2bf91773c4aa-kube-api-access-qfkrc\") pod \"3baac441-39df-4456-a3dd-2bf91773c4aa\" (UID: \"3baac441-39df-4456-a3dd-2bf91773c4aa\") " Jan 30 22:04:08 crc kubenswrapper[4869]: I0130 22:04:08.168416 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwxhp\" (UniqueName: \"kubernetes.io/projected/bd81195d-554e-4f5f-8aae-4513a7c31542-kube-api-access-rwxhp\") pod \"bd81195d-554e-4f5f-8aae-4513a7c31542\" (UID: \"bd81195d-554e-4f5f-8aae-4513a7c31542\") " Jan 30 22:04:08 crc kubenswrapper[4869]: I0130 22:04:08.168476 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3baac441-39df-4456-a3dd-2bf91773c4aa-operator-scripts\") pod \"3baac441-39df-4456-a3dd-2bf91773c4aa\" (UID: \"3baac441-39df-4456-a3dd-2bf91773c4aa\") " Jan 30 22:04:08 crc kubenswrapper[4869]: I0130 22:04:08.168609 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd81195d-554e-4f5f-8aae-4513a7c31542-operator-scripts\") pod \"bd81195d-554e-4f5f-8aae-4513a7c31542\" (UID: \"bd81195d-554e-4f5f-8aae-4513a7c31542\") " Jan 30 22:04:08 crc kubenswrapper[4869]: I0130 22:04:08.169757 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd81195d-554e-4f5f-8aae-4513a7c31542-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bd81195d-554e-4f5f-8aae-4513a7c31542" (UID: "bd81195d-554e-4f5f-8aae-4513a7c31542"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:04:08 crc kubenswrapper[4869]: I0130 22:04:08.169855 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3baac441-39df-4456-a3dd-2bf91773c4aa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3baac441-39df-4456-a3dd-2bf91773c4aa" (UID: "3baac441-39df-4456-a3dd-2bf91773c4aa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:04:08 crc kubenswrapper[4869]: I0130 22:04:08.180219 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd81195d-554e-4f5f-8aae-4513a7c31542-kube-api-access-rwxhp" (OuterVolumeSpecName: "kube-api-access-rwxhp") pod "bd81195d-554e-4f5f-8aae-4513a7c31542" (UID: "bd81195d-554e-4f5f-8aae-4513a7c31542"). InnerVolumeSpecName "kube-api-access-rwxhp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:04:08 crc kubenswrapper[4869]: I0130 22:04:08.180336 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3baac441-39df-4456-a3dd-2bf91773c4aa-kube-api-access-qfkrc" (OuterVolumeSpecName: "kube-api-access-qfkrc") pod "3baac441-39df-4456-a3dd-2bf91773c4aa" (UID: "3baac441-39df-4456-a3dd-2bf91773c4aa"). InnerVolumeSpecName "kube-api-access-qfkrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:04:08 crc kubenswrapper[4869]: I0130 22:04:08.269748 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfkrc\" (UniqueName: \"kubernetes.io/projected/3baac441-39df-4456-a3dd-2bf91773c4aa-kube-api-access-qfkrc\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:08 crc kubenswrapper[4869]: I0130 22:04:08.269784 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwxhp\" (UniqueName: \"kubernetes.io/projected/bd81195d-554e-4f5f-8aae-4513a7c31542-kube-api-access-rwxhp\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:08 crc kubenswrapper[4869]: I0130 22:04:08.269795 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3baac441-39df-4456-a3dd-2bf91773c4aa-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:08 crc kubenswrapper[4869]: I0130 22:04:08.269803 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd81195d-554e-4f5f-8aae-4513a7c31542-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:08 crc kubenswrapper[4869]: I0130 22:04:08.793878 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-b570-account-create-update-9sfxn" event={"ID":"3baac441-39df-4456-a3dd-2bf91773c4aa","Type":"ContainerDied","Data":"2c68cdff3e55e0f389390b44e80b4986d5db4f6058095bbdd10e8837d9b20303"} Jan 30 22:04:08 crc kubenswrapper[4869]: I0130 22:04:08.793949 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c68cdff3e55e0f389390b44e80b4986d5db4f6058095bbdd10e8837d9b20303" Jan 30 22:04:08 crc kubenswrapper[4869]: I0130 22:04:08.793971 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-b570-account-create-update-9sfxn" Jan 30 22:04:08 crc kubenswrapper[4869]: I0130 22:04:08.795600 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-create-lpvxm" event={"ID":"bd81195d-554e-4f5f-8aae-4513a7c31542","Type":"ContainerDied","Data":"b3e6a4c066da99223e102f24b09215524303b01d197ec106c0c8ba67fbe13e27"} Jan 30 22:04:08 crc kubenswrapper[4869]: I0130 22:04:08.795647 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3e6a4c066da99223e102f24b09215524303b01d197ec106c0c8ba67fbe13e27" Jan 30 22:04:08 crc kubenswrapper[4869]: I0130 22:04:08.795654 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-db-create-lpvxm" Jan 30 22:04:13 crc kubenswrapper[4869]: I0130 22:04:13.549261 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-db-sync-7rs8q"] Jan 30 22:04:13 crc kubenswrapper[4869]: E0130 22:04:13.573592 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43e985d3-f817-41ac-919d-9625124f4fcd" containerName="probe" Jan 30 22:04:13 crc kubenswrapper[4869]: I0130 22:04:13.573629 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="43e985d3-f817-41ac-919d-9625124f4fcd" containerName="probe" Jan 30 22:04:13 crc kubenswrapper[4869]: E0130 22:04:13.573640 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd81195d-554e-4f5f-8aae-4513a7c31542" containerName="mariadb-database-create" Jan 30 22:04:13 crc kubenswrapper[4869]: I0130 22:04:13.573647 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd81195d-554e-4f5f-8aae-4513a7c31542" containerName="mariadb-database-create" Jan 30 22:04:13 crc kubenswrapper[4869]: E0130 22:04:13.573665 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43e985d3-f817-41ac-919d-9625124f4fcd" containerName="cinder-backup" Jan 30 22:04:13 crc kubenswrapper[4869]: I0130 22:04:13.573673 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="43e985d3-f817-41ac-919d-9625124f4fcd" containerName="cinder-backup" Jan 30 22:04:13 crc kubenswrapper[4869]: E0130 22:04:13.573698 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3baac441-39df-4456-a3dd-2bf91773c4aa" containerName="mariadb-account-create-update" Jan 30 22:04:13 crc kubenswrapper[4869]: I0130 22:04:13.573704 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3baac441-39df-4456-a3dd-2bf91773c4aa" containerName="mariadb-account-create-update" Jan 30 22:04:13 crc kubenswrapper[4869]: I0130 22:04:13.573874 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd81195d-554e-4f5f-8aae-4513a7c31542" containerName="mariadb-database-create" Jan 30 22:04:13 crc kubenswrapper[4869]: I0130 22:04:13.573888 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="3baac441-39df-4456-a3dd-2bf91773c4aa" containerName="mariadb-account-create-update" Jan 30 22:04:13 crc kubenswrapper[4869]: I0130 22:04:13.573929 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="43e985d3-f817-41ac-919d-9625124f4fcd" containerName="cinder-backup" Jan 30 22:04:13 crc kubenswrapper[4869]: I0130 22:04:13.573940 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="43e985d3-f817-41ac-919d-9625124f4fcd" containerName="probe" Jan 30 22:04:13 crc kubenswrapper[4869]: I0130 22:04:13.574436 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-db-sync-7rs8q" Jan 30 22:04:13 crc kubenswrapper[4869]: I0130 22:04:13.574535 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-db-sync-7rs8q"] Jan 30 22:04:13 crc kubenswrapper[4869]: I0130 22:04:13.577831 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"combined-ca-bundle" Jan 30 22:04:13 crc kubenswrapper[4869]: I0130 22:04:13.577842 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-cinder-dockercfg-hdcm8" Jan 30 22:04:13 crc kubenswrapper[4869]: I0130 22:04:13.578408 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-config-data" Jan 30 22:04:13 crc kubenswrapper[4869]: I0130 22:04:13.578804 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-scripts" Jan 30 22:04:13 crc kubenswrapper[4869]: I0130 22:04:13.750637 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fdf0be0-e87f-4a5f-af90-905de1b95f1d-config-data\") pod \"cinder-db-sync-7rs8q\" (UID: \"0fdf0be0-e87f-4a5f-af90-905de1b95f1d\") " pod="cinder-kuttl-tests/cinder-db-sync-7rs8q" Jan 30 22:04:13 crc kubenswrapper[4869]: I0130 22:04:13.750682 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fdf0be0-e87f-4a5f-af90-905de1b95f1d-combined-ca-bundle\") pod \"cinder-db-sync-7rs8q\" (UID: \"0fdf0be0-e87f-4a5f-af90-905de1b95f1d\") " pod="cinder-kuttl-tests/cinder-db-sync-7rs8q" Jan 30 22:04:13 crc kubenswrapper[4869]: I0130 22:04:13.750710 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0fdf0be0-e87f-4a5f-af90-905de1b95f1d-db-sync-config-data\") pod \"cinder-db-sync-7rs8q\" (UID: \"0fdf0be0-e87f-4a5f-af90-905de1b95f1d\") " pod="cinder-kuttl-tests/cinder-db-sync-7rs8q" Jan 30 22:04:13 crc kubenswrapper[4869]: I0130 22:04:13.750770 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fdf0be0-e87f-4a5f-af90-905de1b95f1d-scripts\") pod \"cinder-db-sync-7rs8q\" (UID: \"0fdf0be0-e87f-4a5f-af90-905de1b95f1d\") " pod="cinder-kuttl-tests/cinder-db-sync-7rs8q" Jan 30 22:04:13 crc kubenswrapper[4869]: I0130 22:04:13.750803 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqxhd\" (UniqueName: \"kubernetes.io/projected/0fdf0be0-e87f-4a5f-af90-905de1b95f1d-kube-api-access-wqxhd\") pod \"cinder-db-sync-7rs8q\" (UID: \"0fdf0be0-e87f-4a5f-af90-905de1b95f1d\") " pod="cinder-kuttl-tests/cinder-db-sync-7rs8q" Jan 30 22:04:13 crc kubenswrapper[4869]: I0130 22:04:13.750883 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0fdf0be0-e87f-4a5f-af90-905de1b95f1d-etc-machine-id\") pod \"cinder-db-sync-7rs8q\" (UID: \"0fdf0be0-e87f-4a5f-af90-905de1b95f1d\") " pod="cinder-kuttl-tests/cinder-db-sync-7rs8q" Jan 30 22:04:13 crc kubenswrapper[4869]: I0130 22:04:13.851734 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/0fdf0be0-e87f-4a5f-af90-905de1b95f1d-scripts\") pod \"cinder-db-sync-7rs8q\" (UID: \"0fdf0be0-e87f-4a5f-af90-905de1b95f1d\") " pod="cinder-kuttl-tests/cinder-db-sync-7rs8q" Jan 30 22:04:13 crc kubenswrapper[4869]: I0130 22:04:13.851790 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqxhd\" (UniqueName: \"kubernetes.io/projected/0fdf0be0-e87f-4a5f-af90-905de1b95f1d-kube-api-access-wqxhd\") pod \"cinder-db-sync-7rs8q\" (UID: \"0fdf0be0-e87f-4a5f-af90-905de1b95f1d\") " pod="cinder-kuttl-tests/cinder-db-sync-7rs8q" Jan 30 22:04:13 crc kubenswrapper[4869]: I0130 22:04:13.851824 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0fdf0be0-e87f-4a5f-af90-905de1b95f1d-etc-machine-id\") pod \"cinder-db-sync-7rs8q\" (UID: \"0fdf0be0-e87f-4a5f-af90-905de1b95f1d\") " pod="cinder-kuttl-tests/cinder-db-sync-7rs8q" Jan 30 22:04:13 crc kubenswrapper[4869]: I0130 22:04:13.851883 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fdf0be0-e87f-4a5f-af90-905de1b95f1d-config-data\") pod \"cinder-db-sync-7rs8q\" (UID: \"0fdf0be0-e87f-4a5f-af90-905de1b95f1d\") " pod="cinder-kuttl-tests/cinder-db-sync-7rs8q" Jan 30 22:04:13 crc kubenswrapper[4869]: I0130 22:04:13.851931 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fdf0be0-e87f-4a5f-af90-905de1b95f1d-combined-ca-bundle\") pod \"cinder-db-sync-7rs8q\" (UID: \"0fdf0be0-e87f-4a5f-af90-905de1b95f1d\") " pod="cinder-kuttl-tests/cinder-db-sync-7rs8q" Jan 30 22:04:13 crc kubenswrapper[4869]: I0130 22:04:13.851960 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0fdf0be0-e87f-4a5f-af90-905de1b95f1d-db-sync-config-data\") pod \"cinder-db-sync-7rs8q\" (UID: \"0fdf0be0-e87f-4a5f-af90-905de1b95f1d\") " pod="cinder-kuttl-tests/cinder-db-sync-7rs8q" Jan 30 22:04:13 crc kubenswrapper[4869]: I0130 22:04:13.852067 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0fdf0be0-e87f-4a5f-af90-905de1b95f1d-etc-machine-id\") pod \"cinder-db-sync-7rs8q\" (UID: \"0fdf0be0-e87f-4a5f-af90-905de1b95f1d\") " pod="cinder-kuttl-tests/cinder-db-sync-7rs8q" Jan 30 22:04:13 crc kubenswrapper[4869]: I0130 22:04:13.857244 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fdf0be0-e87f-4a5f-af90-905de1b95f1d-combined-ca-bundle\") pod \"cinder-db-sync-7rs8q\" (UID: \"0fdf0be0-e87f-4a5f-af90-905de1b95f1d\") " pod="cinder-kuttl-tests/cinder-db-sync-7rs8q" Jan 30 22:04:13 crc kubenswrapper[4869]: I0130 22:04:13.858638 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fdf0be0-e87f-4a5f-af90-905de1b95f1d-scripts\") pod \"cinder-db-sync-7rs8q\" (UID: \"0fdf0be0-e87f-4a5f-af90-905de1b95f1d\") " pod="cinder-kuttl-tests/cinder-db-sync-7rs8q" Jan 30 22:04:13 crc kubenswrapper[4869]: I0130 22:04:13.859283 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fdf0be0-e87f-4a5f-af90-905de1b95f1d-config-data\") pod \"cinder-db-sync-7rs8q\" (UID: \"0fdf0be0-e87f-4a5f-af90-905de1b95f1d\") " 
pod="cinder-kuttl-tests/cinder-db-sync-7rs8q" Jan 30 22:04:13 crc kubenswrapper[4869]: I0130 22:04:13.863616 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0fdf0be0-e87f-4a5f-af90-905de1b95f1d-db-sync-config-data\") pod \"cinder-db-sync-7rs8q\" (UID: \"0fdf0be0-e87f-4a5f-af90-905de1b95f1d\") " pod="cinder-kuttl-tests/cinder-db-sync-7rs8q" Jan 30 22:04:13 crc kubenswrapper[4869]: I0130 22:04:13.871721 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqxhd\" (UniqueName: \"kubernetes.io/projected/0fdf0be0-e87f-4a5f-af90-905de1b95f1d-kube-api-access-wqxhd\") pod \"cinder-db-sync-7rs8q\" (UID: \"0fdf0be0-e87f-4a5f-af90-905de1b95f1d\") " pod="cinder-kuttl-tests/cinder-db-sync-7rs8q" Jan 30 22:04:13 crc kubenswrapper[4869]: I0130 22:04:13.887491 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-db-sync-7rs8q" Jan 30 22:04:16 crc kubenswrapper[4869]: I0130 22:04:16.712939 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-db-sync-7rs8q"] Jan 30 22:04:16 crc kubenswrapper[4869]: I0130 22:04:16.853998 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-sync-7rs8q" event={"ID":"0fdf0be0-e87f-4a5f-af90-905de1b95f1d","Type":"ContainerStarted","Data":"4635d7231e005fc9fb1d7987356aa825c563ace1c9554e253151204dacf57b1b"} Jan 30 22:04:17 crc kubenswrapper[4869]: I0130 22:04:17.863503 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-sync-7rs8q" event={"ID":"0fdf0be0-e87f-4a5f-af90-905de1b95f1d","Type":"ContainerStarted","Data":"681940ad68535c552ce6434a887f71e1eb9fc67fd5d9ee409b77190217e3ba77"} Jan 30 22:04:17 crc kubenswrapper[4869]: I0130 22:04:17.881991 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-db-sync-7rs8q" podStartSLOduration=4.88197691 podStartE2EDuration="4.88197691s" podCreationTimestamp="2026-01-30 22:04:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:04:17.8803783 +0000 UTC m=+1258.766136335" watchObservedRunningTime="2026-01-30 22:04:17.88197691 +0000 UTC m=+1258.767734935" Jan 30 22:04:19 crc kubenswrapper[4869]: I0130 22:04:19.884016 4869 generic.go:334] "Generic (PLEG): container finished" podID="0fdf0be0-e87f-4a5f-af90-905de1b95f1d" containerID="681940ad68535c552ce6434a887f71e1eb9fc67fd5d9ee409b77190217e3ba77" exitCode=0 Jan 30 22:04:19 crc kubenswrapper[4869]: I0130 22:04:19.889163 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-sync-7rs8q" event={"ID":"0fdf0be0-e87f-4a5f-af90-905de1b95f1d","Type":"ContainerDied","Data":"681940ad68535c552ce6434a887f71e1eb9fc67fd5d9ee409b77190217e3ba77"} Jan 30 22:04:21 crc kubenswrapper[4869]: I0130 22:04:21.142651 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-db-sync-7rs8q" Jan 30 22:04:21 crc kubenswrapper[4869]: I0130 22:04:21.248141 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0fdf0be0-e87f-4a5f-af90-905de1b95f1d-db-sync-config-data\") pod \"0fdf0be0-e87f-4a5f-af90-905de1b95f1d\" (UID: \"0fdf0be0-e87f-4a5f-af90-905de1b95f1d\") " Jan 30 22:04:21 crc kubenswrapper[4869]: I0130 22:04:21.248278 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fdf0be0-e87f-4a5f-af90-905de1b95f1d-scripts\") pod \"0fdf0be0-e87f-4a5f-af90-905de1b95f1d\" (UID: \"0fdf0be0-e87f-4a5f-af90-905de1b95f1d\") " Jan 30 22:04:21 crc kubenswrapper[4869]: I0130 22:04:21.248341 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0fdf0be0-e87f-4a5f-af90-905de1b95f1d-etc-machine-id\") pod \"0fdf0be0-e87f-4a5f-af90-905de1b95f1d\" (UID: \"0fdf0be0-e87f-4a5f-af90-905de1b95f1d\") " Jan 30 22:04:21 crc kubenswrapper[4869]: I0130 22:04:21.248431 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fdf0be0-e87f-4a5f-af90-905de1b95f1d-combined-ca-bundle\") pod \"0fdf0be0-e87f-4a5f-af90-905de1b95f1d\" (UID: \"0fdf0be0-e87f-4a5f-af90-905de1b95f1d\") " Jan 30 22:04:21 crc kubenswrapper[4869]: I0130 22:04:21.248486 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fdf0be0-e87f-4a5f-af90-905de1b95f1d-config-data\") pod \"0fdf0be0-e87f-4a5f-af90-905de1b95f1d\" (UID: \"0fdf0be0-e87f-4a5f-af90-905de1b95f1d\") " Jan 30 22:04:21 crc kubenswrapper[4869]: I0130 22:04:21.248525 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqxhd\" (UniqueName: \"kubernetes.io/projected/0fdf0be0-e87f-4a5f-af90-905de1b95f1d-kube-api-access-wqxhd\") pod \"0fdf0be0-e87f-4a5f-af90-905de1b95f1d\" (UID: \"0fdf0be0-e87f-4a5f-af90-905de1b95f1d\") " Jan 30 22:04:21 crc kubenswrapper[4869]: I0130 22:04:21.248590 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fdf0be0-e87f-4a5f-af90-905de1b95f1d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0fdf0be0-e87f-4a5f-af90-905de1b95f1d" (UID: "0fdf0be0-e87f-4a5f-af90-905de1b95f1d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:04:21 crc kubenswrapper[4869]: I0130 22:04:21.248841 4869 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0fdf0be0-e87f-4a5f-af90-905de1b95f1d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:21 crc kubenswrapper[4869]: I0130 22:04:21.255587 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fdf0be0-e87f-4a5f-af90-905de1b95f1d-scripts" (OuterVolumeSpecName: "scripts") pod "0fdf0be0-e87f-4a5f-af90-905de1b95f1d" (UID: "0fdf0be0-e87f-4a5f-af90-905de1b95f1d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:21 crc kubenswrapper[4869]: I0130 22:04:21.255594 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fdf0be0-e87f-4a5f-af90-905de1b95f1d-kube-api-access-wqxhd" (OuterVolumeSpecName: "kube-api-access-wqxhd") pod "0fdf0be0-e87f-4a5f-af90-905de1b95f1d" (UID: "0fdf0be0-e87f-4a5f-af90-905de1b95f1d"). InnerVolumeSpecName "kube-api-access-wqxhd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:04:21 crc kubenswrapper[4869]: I0130 22:04:21.255670 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fdf0be0-e87f-4a5f-af90-905de1b95f1d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "0fdf0be0-e87f-4a5f-af90-905de1b95f1d" (UID: "0fdf0be0-e87f-4a5f-af90-905de1b95f1d"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:21 crc kubenswrapper[4869]: I0130 22:04:21.272548 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fdf0be0-e87f-4a5f-af90-905de1b95f1d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0fdf0be0-e87f-4a5f-af90-905de1b95f1d" (UID: "0fdf0be0-e87f-4a5f-af90-905de1b95f1d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:21 crc kubenswrapper[4869]: I0130 22:04:21.283923 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fdf0be0-e87f-4a5f-af90-905de1b95f1d-config-data" (OuterVolumeSpecName: "config-data") pod "0fdf0be0-e87f-4a5f-af90-905de1b95f1d" (UID: "0fdf0be0-e87f-4a5f-af90-905de1b95f1d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:21 crc kubenswrapper[4869]: I0130 22:04:21.349737 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqxhd\" (UniqueName: \"kubernetes.io/projected/0fdf0be0-e87f-4a5f-af90-905de1b95f1d-kube-api-access-wqxhd\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:21 crc kubenswrapper[4869]: I0130 22:04:21.349777 4869 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0fdf0be0-e87f-4a5f-af90-905de1b95f1d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:21 crc kubenswrapper[4869]: I0130 22:04:21.349790 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fdf0be0-e87f-4a5f-af90-905de1b95f1d-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:21 crc kubenswrapper[4869]: I0130 22:04:21.349801 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fdf0be0-e87f-4a5f-af90-905de1b95f1d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:21 crc kubenswrapper[4869]: I0130 22:04:21.349811 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fdf0be0-e87f-4a5f-af90-905de1b95f1d-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:21 crc kubenswrapper[4869]: I0130 22:04:21.902435 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-sync-7rs8q" event={"ID":"0fdf0be0-e87f-4a5f-af90-905de1b95f1d","Type":"ContainerDied","Data":"4635d7231e005fc9fb1d7987356aa825c563ace1c9554e253151204dacf57b1b"} Jan 30 22:04:21 crc kubenswrapper[4869]: I0130 22:04:21.902469 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-db-sync-7rs8q" Jan 30 22:04:21 crc kubenswrapper[4869]: I0130 22:04:21.902488 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4635d7231e005fc9fb1d7987356aa825c563ace1c9554e253151204dacf57b1b" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.222498 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-0"] Jan 30 22:04:22 crc kubenswrapper[4869]: E0130 22:04:22.222757 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fdf0be0-e87f-4a5f-af90-905de1b95f1d" containerName="cinder-db-sync" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.222769 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fdf0be0-e87f-4a5f-af90-905de1b95f1d" containerName="cinder-db-sync" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.222936 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fdf0be0-e87f-4a5f-af90-905de1b95f1d" containerName="cinder-db-sync" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.223985 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.231453 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"combined-ca-bundle" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.232465 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-scripts" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.232642 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-cinder-dockercfg-hdcm8" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.232918 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-scheduler-config-data" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.237647 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-config-data" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.240501 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-backup-0"] Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.241775 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.247720 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-backup-config-data" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.250436 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-volume-volume1-0"] Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.251829 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.255454 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-volume-volume1-config-data" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.259095 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-0"] Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.260644 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe46942a-6439-4fdb-a1cf-3db1686ae85b-config-data\") pod \"cinder-scheduler-0\" (UID: \"fe46942a-6439-4fdb-a1cf-3db1686ae85b\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.260703 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe46942a-6439-4fdb-a1cf-3db1686ae85b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fe46942a-6439-4fdb-a1cf-3db1686ae85b\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.260730 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fe46942a-6439-4fdb-a1cf-3db1686ae85b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fe46942a-6439-4fdb-a1cf-3db1686ae85b\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.260754 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fe46942a-6439-4fdb-a1cf-3db1686ae85b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fe46942a-6439-4fdb-a1cf-3db1686ae85b\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.260809 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grr56\" (UniqueName: \"kubernetes.io/projected/fe46942a-6439-4fdb-a1cf-3db1686ae85b-kube-api-access-grr56\") pod \"cinder-scheduler-0\" (UID: \"fe46942a-6439-4fdb-a1cf-3db1686ae85b\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.260845 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe46942a-6439-4fdb-a1cf-3db1686ae85b-scripts\") pod \"cinder-scheduler-0\" (UID: \"fe46942a-6439-4fdb-a1cf-3db1686ae85b\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.275426 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-backup-0"] Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.293032 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-volume-volume1-0"] Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.362344 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.362435 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.362470 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.362505 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grr56\" (UniqueName: \"kubernetes.io/projected/fe46942a-6439-4fdb-a1cf-3db1686ae85b-kube-api-access-grr56\") pod \"cinder-scheduler-0\" (UID: \"fe46942a-6439-4fdb-a1cf-3db1686ae85b\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.362531 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97c08a25-b1da-4776-ad22-5f474151f2e6-config-data-custom\") pod \"cinder-backup-0\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.362557 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-run\") pod 
\"cinder-volume-volume1-0\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.362582 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.362609 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73299ef4-0062-46e4-a329-078037f1ef33-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.362677 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe46942a-6439-4fdb-a1cf-3db1686ae85b-scripts\") pod \"cinder-scheduler-0\" (UID: \"fe46942a-6439-4fdb-a1cf-3db1686ae85b\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.362714 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97c08a25-b1da-4776-ad22-5f474151f2e6-scripts\") pod \"cinder-backup-0\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.362737 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.362760 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-run\") pod \"cinder-backup-0\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.362778 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.362797 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.362818 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-etc-nvme\") pod \"cinder-backup-0\" (UID: 
\"97c08a25-b1da-4776-ad22-5f474151f2e6\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.362849 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97c08a25-b1da-4776-ad22-5f474151f2e6-config-data\") pod \"cinder-backup-0\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.362905 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73299ef4-0062-46e4-a329-078037f1ef33-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.362936 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.362966 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.362991 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-lib-modules\") pod \"cinder-backup-0\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.363011 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x42xr\" (UniqueName: \"kubernetes.io/projected/97c08a25-b1da-4776-ad22-5f474151f2e6-kube-api-access-x42xr\") pod \"cinder-backup-0\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.363033 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe46942a-6439-4fdb-a1cf-3db1686ae85b-config-data\") pod \"cinder-scheduler-0\" (UID: \"fe46942a-6439-4fdb-a1cf-3db1686ae85b\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.363052 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-dev\") pod \"cinder-backup-0\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.363072 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe46942a-6439-4fdb-a1cf-3db1686ae85b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fe46942a-6439-4fdb-a1cf-3db1686ae85b\") " 
pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.363097 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fe46942a-6439-4fdb-a1cf-3db1686ae85b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fe46942a-6439-4fdb-a1cf-3db1686ae85b\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.363126 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe46942a-6439-4fdb-a1cf-3db1686ae85b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fe46942a-6439-4fdb-a1cf-3db1686ae85b\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.363147 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-dev\") pod \"cinder-volume-volume1-0\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.363177 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.363196 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.363215 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-sys\") pod \"cinder-volume-volume1-0\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.363237 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73299ef4-0062-46e4-a329-078037f1ef33-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.363259 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97c08a25-b1da-4776-ad22-5f474151f2e6-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.363292 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/73299ef4-0062-46e4-a329-078037f1ef33-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " 
pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.363333 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78mzb\" (UniqueName: \"kubernetes.io/projected/73299ef4-0062-46e4-a329-078037f1ef33-kube-api-access-78mzb\") pod \"cinder-volume-volume1-0\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.363353 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-sys\") pod \"cinder-backup-0\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.363386 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.363461 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fe46942a-6439-4fdb-a1cf-3db1686ae85b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fe46942a-6439-4fdb-a1cf-3db1686ae85b\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.366295 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe46942a-6439-4fdb-a1cf-3db1686ae85b-scripts\") pod \"cinder-scheduler-0\" (UID: \"fe46942a-6439-4fdb-a1cf-3db1686ae85b\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.367488 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe46942a-6439-4fdb-a1cf-3db1686ae85b-config-data\") pod \"cinder-scheduler-0\" (UID: \"fe46942a-6439-4fdb-a1cf-3db1686ae85b\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.367808 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe46942a-6439-4fdb-a1cf-3db1686ae85b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fe46942a-6439-4fdb-a1cf-3db1686ae85b\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.368350 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe46942a-6439-4fdb-a1cf-3db1686ae85b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fe46942a-6439-4fdb-a1cf-3db1686ae85b\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.382623 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grr56\" (UniqueName: \"kubernetes.io/projected/fe46942a-6439-4fdb-a1cf-3db1686ae85b-kube-api-access-grr56\") pod \"cinder-scheduler-0\" (UID: \"fe46942a-6439-4fdb-a1cf-3db1686ae85b\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.397622 4869 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.398942 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.401551 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-api-config-data" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.401797 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cert-cinder-public-svc" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.401762 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cert-cinder-internal-svc" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.412220 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.465431 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97c08a25-b1da-4776-ad22-5f474151f2e6-scripts\") pod \"cinder-backup-0\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.465504 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.465540 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-run\") pod \"cinder-backup-0\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.465585 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.465609 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.465631 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-etc-nvme\") pod \"cinder-backup-0\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.465686 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97c08a25-b1da-4776-ad22-5f474151f2e6-config-data\") pod \"cinder-backup-0\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:22 crc 
kubenswrapper[4869]: I0130 22:04:22.465707 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73299ef4-0062-46e4-a329-078037f1ef33-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.465751 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.465784 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4fnf\" (UniqueName: \"kubernetes.io/projected/a6955a5a-a679-4af8-8f79-0b849abdcb4f-kube-api-access-h4fnf\") pod \"cinder-api-0\" (UID: \"a6955a5a-a679-4af8-8f79-0b849abdcb4f\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.465825 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6955a5a-a679-4af8-8f79-0b849abdcb4f-scripts\") pod \"cinder-api-0\" (UID: \"a6955a5a-a679-4af8-8f79-0b849abdcb4f\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.465848 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.465887 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-lib-modules\") pod \"cinder-backup-0\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.465942 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x42xr\" (UniqueName: \"kubernetes.io/projected/97c08a25-b1da-4776-ad22-5f474151f2e6-kube-api-access-x42xr\") pod \"cinder-backup-0\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.466025 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-dev\") pod \"cinder-backup-0\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.466090 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-dev\") pod \"cinder-volume-volume1-0\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.466136 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a6955a5a-a679-4af8-8f79-0b849abdcb4f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"a6955a5a-a679-4af8-8f79-0b849abdcb4f\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.466176 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.466220 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.466241 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-sys\") pod \"cinder-volume-volume1-0\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.466263 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73299ef4-0062-46e4-a329-078037f1ef33-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.466298 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97c08a25-b1da-4776-ad22-5f474151f2e6-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.466316 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78mzb\" (UniqueName: \"kubernetes.io/projected/73299ef4-0062-46e4-a329-078037f1ef33-kube-api-access-78mzb\") pod \"cinder-volume-volume1-0\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.466332 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-sys\") pod \"cinder-backup-0\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.466351 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/73299ef4-0062-46e4-a329-078037f1ef33-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.466387 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6955a5a-a679-4af8-8f79-0b849abdcb4f-logs\") pod \"cinder-api-0\" (UID: \"a6955a5a-a679-4af8-8f79-0b849abdcb4f\") " 
pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.466405 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6955a5a-a679-4af8-8f79-0b849abdcb4f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"a6955a5a-a679-4af8-8f79-0b849abdcb4f\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.466429 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.466466 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.466483 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6955a5a-a679-4af8-8f79-0b849abdcb4f-config-data\") pod \"cinder-api-0\" (UID: \"a6955a5a-a679-4af8-8f79-0b849abdcb4f\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.466505 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.466540 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.466555 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a6955a5a-a679-4af8-8f79-0b849abdcb4f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a6955a5a-a679-4af8-8f79-0b849abdcb4f\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.466575 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97c08a25-b1da-4776-ad22-5f474151f2e6-config-data-custom\") pod \"cinder-backup-0\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.466609 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-run\") pod \"cinder-volume-volume1-0\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.466630 4869 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.466653 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6955a5a-a679-4af8-8f79-0b849abdcb4f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a6955a5a-a679-4af8-8f79-0b849abdcb4f\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.466705 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73299ef4-0062-46e4-a329-078037f1ef33-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.466734 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a6955a5a-a679-4af8-8f79-0b849abdcb4f-config-data-custom\") pod \"cinder-api-0\" (UID: \"a6955a5a-a679-4af8-8f79-0b849abdcb4f\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.466950 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-dev\") pod \"cinder-backup-0\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.467014 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.467088 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.467123 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-lib-modules\") pod \"cinder-backup-0\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.467237 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-sys\") pod \"cinder-volume-volume1-0\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.467334 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " 
pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.467450 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-dev\") pod \"cinder-volume-volume1-0\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.467490 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.467529 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-sys\") pod \"cinder-backup-0\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.467538 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.467572 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-run\") pod \"cinder-volume-volume1-0\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.467627 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.468000 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.468425 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-run\") pod \"cinder-backup-0\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.469005 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.469095 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-var-locks-brick\") pod 
\"cinder-volume-volume1-0\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.469142 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.469182 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.469249 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.469297 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-etc-nvme\") pod \"cinder-backup-0\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.472380 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73299ef4-0062-46e4-a329-078037f1ef33-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.472555 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97c08a25-b1da-4776-ad22-5f474151f2e6-config-data-custom\") pod \"cinder-backup-0\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.474376 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/73299ef4-0062-46e4-a329-078037f1ef33-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.475004 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73299ef4-0062-46e4-a329-078037f1ef33-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.475143 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97c08a25-b1da-4776-ad22-5f474151f2e6-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.476379 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97c08a25-b1da-4776-ad22-5f474151f2e6-scripts\") pod \"cinder-backup-0\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.477317 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97c08a25-b1da-4776-ad22-5f474151f2e6-config-data\") pod \"cinder-backup-0\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.478370 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73299ef4-0062-46e4-a329-078037f1ef33-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.487885 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78mzb\" (UniqueName: \"kubernetes.io/projected/73299ef4-0062-46e4-a329-078037f1ef33-kube-api-access-78mzb\") pod \"cinder-volume-volume1-0\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.499283 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x42xr\" (UniqueName: \"kubernetes.io/projected/97c08a25-b1da-4776-ad22-5f474151f2e6-kube-api-access-x42xr\") pod \"cinder-backup-0\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.537160 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.559542 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.569472 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4fnf\" (UniqueName: \"kubernetes.io/projected/a6955a5a-a679-4af8-8f79-0b849abdcb4f-kube-api-access-h4fnf\") pod \"cinder-api-0\" (UID: \"a6955a5a-a679-4af8-8f79-0b849abdcb4f\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.569524 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6955a5a-a679-4af8-8f79-0b849abdcb4f-scripts\") pod \"cinder-api-0\" (UID: \"a6955a5a-a679-4af8-8f79-0b849abdcb4f\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.569576 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6955a5a-a679-4af8-8f79-0b849abdcb4f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"a6955a5a-a679-4af8-8f79-0b849abdcb4f\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.569633 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6955a5a-a679-4af8-8f79-0b849abdcb4f-logs\") pod \"cinder-api-0\" (UID: \"a6955a5a-a679-4af8-8f79-0b849abdcb4f\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.569654 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6955a5a-a679-4af8-8f79-0b849abdcb4f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"a6955a5a-a679-4af8-8f79-0b849abdcb4f\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.569684 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6955a5a-a679-4af8-8f79-0b849abdcb4f-config-data\") pod \"cinder-api-0\" (UID: \"a6955a5a-a679-4af8-8f79-0b849abdcb4f\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.571075 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a6955a5a-a679-4af8-8f79-0b849abdcb4f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a6955a5a-a679-4af8-8f79-0b849abdcb4f\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.570988 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a6955a5a-a679-4af8-8f79-0b849abdcb4f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a6955a5a-a679-4af8-8f79-0b849abdcb4f\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.571494 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6955a5a-a679-4af8-8f79-0b849abdcb4f-logs\") pod \"cinder-api-0\" (UID: \"a6955a5a-a679-4af8-8f79-0b849abdcb4f\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.571546 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a6955a5a-a679-4af8-8f79-0b849abdcb4f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a6955a5a-a679-4af8-8f79-0b849abdcb4f\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.571615 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a6955a5a-a679-4af8-8f79-0b849abdcb4f-config-data-custom\") pod \"cinder-api-0\" (UID: \"a6955a5a-a679-4af8-8f79-0b849abdcb4f\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.572825 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.577363 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6955a5a-a679-4af8-8f79-0b849abdcb4f-scripts\") pod \"cinder-api-0\" (UID: \"a6955a5a-a679-4af8-8f79-0b849abdcb4f\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.577457 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6955a5a-a679-4af8-8f79-0b849abdcb4f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a6955a5a-a679-4af8-8f79-0b849abdcb4f\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.577548 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6955a5a-a679-4af8-8f79-0b849abdcb4f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"a6955a5a-a679-4af8-8f79-0b849abdcb4f\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.577823 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6955a5a-a679-4af8-8f79-0b849abdcb4f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"a6955a5a-a679-4af8-8f79-0b849abdcb4f\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.578154 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a6955a5a-a679-4af8-8f79-0b849abdcb4f-config-data-custom\") pod \"cinder-api-0\" (UID: \"a6955a5a-a679-4af8-8f79-0b849abdcb4f\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.579134 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6955a5a-a679-4af8-8f79-0b849abdcb4f-config-data\") pod \"cinder-api-0\" (UID: \"a6955a5a-a679-4af8-8f79-0b849abdcb4f\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.597276 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4fnf\" (UniqueName: \"kubernetes.io/projected/a6955a5a-a679-4af8-8f79-0b849abdcb4f-kube-api-access-h4fnf\") pod \"cinder-api-0\" (UID: \"a6955a5a-a679-4af8-8f79-0b849abdcb4f\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:04:22 crc kubenswrapper[4869]: I0130 22:04:22.747322 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:04:23 crc kubenswrapper[4869]: I0130 22:04:23.010313 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-0"] Jan 30 22:04:23 crc kubenswrapper[4869]: I0130 22:04:23.025909 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 30 22:04:23 crc kubenswrapper[4869]: W0130 22:04:23.027446 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6955a5a_a679_4af8_8f79_0b849abdcb4f.slice/crio-5d87686d8237d63aa796cd13814ea5103a587df078ccf36e9821501512aa36dd WatchSource:0}: Error finding container 5d87686d8237d63aa796cd13814ea5103a587df078ccf36e9821501512aa36dd: Status 404 returned error can't find the container with id 5d87686d8237d63aa796cd13814ea5103a587df078ccf36e9821501512aa36dd Jan 30 22:04:23 crc kubenswrapper[4869]: I0130 22:04:23.114233 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-backup-0"] Jan 30 22:04:23 crc kubenswrapper[4869]: W0130 22:04:23.134521 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97c08a25_b1da_4776_ad22_5f474151f2e6.slice/crio-6ba9a7a4767e7672cd825af26edba370b564ba74bbfea727ef68d081e5f545da WatchSource:0}: Error finding container 6ba9a7a4767e7672cd825af26edba370b564ba74bbfea727ef68d081e5f545da: Status 404 returned error can't find the container with id 6ba9a7a4767e7672cd825af26edba370b564ba74bbfea727ef68d081e5f545da Jan 30 22:04:23 crc kubenswrapper[4869]: I0130 22:04:23.159947 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-volume-volume1-0"] Jan 30 22:04:23 crc kubenswrapper[4869]: I0130 22:04:23.922651 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-0" event={"ID":"fe46942a-6439-4fdb-a1cf-3db1686ae85b","Type":"ContainerStarted","Data":"de4373d4427bb99305dcfaa5765d5051baa2e853f03446cf0852b6bee9bad816"} Jan 30 22:04:23 crc kubenswrapper[4869]: I0130 22:04:23.923161 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-0" event={"ID":"fe46942a-6439-4fdb-a1cf-3db1686ae85b","Type":"ContainerStarted","Data":"a5f14c920e97ef855e05783cc40148494e51a9bf9f89120b66325d2ddf631b79"} Jan 30 22:04:23 crc kubenswrapper[4869]: I0130 22:04:23.932185 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"73299ef4-0062-46e4-a329-078037f1ef33","Type":"ContainerStarted","Data":"9d1b2bcc68a832b46efb38278d4216bd6c2ed6bad251f5a1369150acd88f7090"} Jan 30 22:04:23 crc kubenswrapper[4869]: I0130 22:04:23.932436 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"73299ef4-0062-46e4-a329-078037f1ef33","Type":"ContainerStarted","Data":"3676c1f06bfbf4275c2688835979209fcd5c8aede7c6f951b52a8abdc5d2410c"} Jan 30 22:04:23 crc kubenswrapper[4869]: I0130 22:04:23.932514 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"73299ef4-0062-46e4-a329-078037f1ef33","Type":"ContainerStarted","Data":"a6677262d3ae53f45575e1e86d517948f7c47210ce30377dee8b2474a9fd0bb3"} Jan 30 22:04:23 crc kubenswrapper[4869]: I0130 22:04:23.943135 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-0" 
event={"ID":"97c08a25-b1da-4776-ad22-5f474151f2e6","Type":"ContainerStarted","Data":"bd96eebc3f75ff6c352f30d97b60d4479ec4ba8b5644f88fe62e1f98e8819efd"} Jan 30 22:04:23 crc kubenswrapper[4869]: I0130 22:04:23.943192 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-0" event={"ID":"97c08a25-b1da-4776-ad22-5f474151f2e6","Type":"ContainerStarted","Data":"bc4a063ae0b88689083ee17f00831ac0e61f02e570a32f62bb23bdb6f6331483"} Jan 30 22:04:23 crc kubenswrapper[4869]: I0130 22:04:23.943205 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-0" event={"ID":"97c08a25-b1da-4776-ad22-5f474151f2e6","Type":"ContainerStarted","Data":"6ba9a7a4767e7672cd825af26edba370b564ba74bbfea727ef68d081e5f545da"} Jan 30 22:04:23 crc kubenswrapper[4869]: I0130 22:04:23.950647 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"a6955a5a-a679-4af8-8f79-0b849abdcb4f","Type":"ContainerStarted","Data":"6b891da32dae8ee96fd5ba92141922a42aebd27dae4e2b5460da8a39b7e4a886"} Jan 30 22:04:23 crc kubenswrapper[4869]: I0130 22:04:23.950706 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"a6955a5a-a679-4af8-8f79-0b849abdcb4f","Type":"ContainerStarted","Data":"5d87686d8237d63aa796cd13814ea5103a587df078ccf36e9821501512aa36dd"} Jan 30 22:04:23 crc kubenswrapper[4869]: I0130 22:04:23.963881 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podStartSLOduration=1.96386667 podStartE2EDuration="1.96386667s" podCreationTimestamp="2026-01-30 22:04:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:04:23.960323299 +0000 UTC m=+1264.846081324" watchObservedRunningTime="2026-01-30 22:04:23.96386667 +0000 UTC m=+1264.849624695" Jan 30 22:04:23 crc kubenswrapper[4869]: I0130 22:04:23.992869 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-backup-0" podStartSLOduration=1.992848747 podStartE2EDuration="1.992848747s" podCreationTimestamp="2026-01-30 22:04:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:04:23.984044142 +0000 UTC m=+1264.869802167" watchObservedRunningTime="2026-01-30 22:04:23.992848747 +0000 UTC m=+1264.878606772" Jan 30 22:04:24 crc kubenswrapper[4869]: I0130 22:04:24.959781 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"a6955a5a-a679-4af8-8f79-0b849abdcb4f","Type":"ContainerStarted","Data":"4f973521da247eda0b7b0afd610d6529feef403d778d35a089630bede6fcfc9e"} Jan 30 22:04:24 crc kubenswrapper[4869]: I0130 22:04:24.960382 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:04:24 crc kubenswrapper[4869]: I0130 22:04:24.961358 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-0" event={"ID":"fe46942a-6439-4fdb-a1cf-3db1686ae85b","Type":"ContainerStarted","Data":"2553451c001f7f14ccefb67ebdb2cfc4e5f3237a9f51778617d9cc4c56fb04c6"} Jan 30 22:04:25 crc kubenswrapper[4869]: I0130 22:04:25.016404 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-api-0" podStartSLOduration=3.016381856 
podStartE2EDuration="3.016381856s" podCreationTimestamp="2026-01-30 22:04:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:04:24.989874055 +0000 UTC m=+1265.875632090" watchObservedRunningTime="2026-01-30 22:04:25.016381856 +0000 UTC m=+1265.902139871" Jan 30 22:04:25 crc kubenswrapper[4869]: I0130 22:04:25.018225 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-scheduler-0" podStartSLOduration=3.018215943 podStartE2EDuration="3.018215943s" podCreationTimestamp="2026-01-30 22:04:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:04:25.014307661 +0000 UTC m=+1265.900065686" watchObservedRunningTime="2026-01-30 22:04:25.018215943 +0000 UTC m=+1265.903973968" Jan 30 22:04:25 crc kubenswrapper[4869]: E0130 22:04:25.657682 4869 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73299ef4_0062_46e4_a329_078037f1ef33.slice/crio-conmon-9d1b2bcc68a832b46efb38278d4216bd6c2ed6bad251f5a1369150acd88f7090.scope\": RecentStats: unable to find data in memory cache]" Jan 30 22:04:25 crc kubenswrapper[4869]: I0130 22:04:25.971491 4869 generic.go:334] "Generic (PLEG): container finished" podID="73299ef4-0062-46e4-a329-078037f1ef33" containerID="9d1b2bcc68a832b46efb38278d4216bd6c2ed6bad251f5a1369150acd88f7090" exitCode=1 Jan 30 22:04:25 crc kubenswrapper[4869]: I0130 22:04:25.971565 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"73299ef4-0062-46e4-a329-078037f1ef33","Type":"ContainerDied","Data":"9d1b2bcc68a832b46efb38278d4216bd6c2ed6bad251f5a1369150acd88f7090"} Jan 30 22:04:25 crc kubenswrapper[4869]: I0130 22:04:25.973297 4869 scope.go:117] "RemoveContainer" containerID="9d1b2bcc68a832b46efb38278d4216bd6c2ed6bad251f5a1369150acd88f7090" Jan 30 22:04:26 crc kubenswrapper[4869]: I0130 22:04:26.981411 4869 generic.go:334] "Generic (PLEG): container finished" podID="73299ef4-0062-46e4-a329-078037f1ef33" containerID="3676c1f06bfbf4275c2688835979209fcd5c8aede7c6f951b52a8abdc5d2410c" exitCode=1 Jan 30 22:04:26 crc kubenswrapper[4869]: I0130 22:04:26.981509 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"73299ef4-0062-46e4-a329-078037f1ef33","Type":"ContainerDied","Data":"3676c1f06bfbf4275c2688835979209fcd5c8aede7c6f951b52a8abdc5d2410c"} Jan 30 22:04:26 crc kubenswrapper[4869]: I0130 22:04:26.981843 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"73299ef4-0062-46e4-a329-078037f1ef33","Type":"ContainerStarted","Data":"8da988f07fe8049664b374efe74dd964fdf796cfbd3c1bd1dce72c2eee4d263a"} Jan 30 22:04:26 crc kubenswrapper[4869]: I0130 22:04:26.982391 4869 scope.go:117] "RemoveContainer" containerID="3676c1f06bfbf4275c2688835979209fcd5c8aede7c6f951b52a8abdc5d2410c" Jan 30 22:04:27 crc kubenswrapper[4869]: I0130 22:04:27.537536 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 30 22:04:27 crc kubenswrapper[4869]: I0130 22:04:27.560752 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:27 crc 
kubenswrapper[4869]: I0130 22:04:27.573794 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:27 crc kubenswrapper[4869]: I0130 22:04:27.573831 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:27 crc kubenswrapper[4869]: I0130 22:04:27.996024 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"73299ef4-0062-46e4-a329-078037f1ef33","Type":"ContainerStarted","Data":"4d71b98907b306fef7129f27073070194839811cb77cb2abbcad4629b93cbbe7"} Jan 30 22:04:27 crc kubenswrapper[4869]: I0130 22:04:27.997017 4869 scope.go:117] "RemoveContainer" containerID="8da988f07fe8049664b374efe74dd964fdf796cfbd3c1bd1dce72c2eee4d263a" Jan 30 22:04:27 crc kubenswrapper[4869]: E0130 22:04:27.997275 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 10s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(73299ef4-0062-46e4-a329-078037f1ef33)\"" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="73299ef4-0062-46e4-a329-078037f1ef33" Jan 30 22:04:29 crc kubenswrapper[4869]: I0130 22:04:29.006422 4869 generic.go:334] "Generic (PLEG): container finished" podID="73299ef4-0062-46e4-a329-078037f1ef33" containerID="8da988f07fe8049664b374efe74dd964fdf796cfbd3c1bd1dce72c2eee4d263a" exitCode=1 Jan 30 22:04:29 crc kubenswrapper[4869]: I0130 22:04:29.006475 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"73299ef4-0062-46e4-a329-078037f1ef33","Type":"ContainerDied","Data":"8da988f07fe8049664b374efe74dd964fdf796cfbd3c1bd1dce72c2eee4d263a"} Jan 30 22:04:29 crc kubenswrapper[4869]: I0130 22:04:29.006517 4869 scope.go:117] "RemoveContainer" containerID="9d1b2bcc68a832b46efb38278d4216bd6c2ed6bad251f5a1369150acd88f7090" Jan 30 22:04:29 crc kubenswrapper[4869]: I0130 22:04:29.007256 4869 scope.go:117] "RemoveContainer" containerID="8da988f07fe8049664b374efe74dd964fdf796cfbd3c1bd1dce72c2eee4d263a" Jan 30 22:04:29 crc kubenswrapper[4869]: E0130 22:04:29.007512 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 10s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(73299ef4-0062-46e4-a329-078037f1ef33)\"" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="73299ef4-0062-46e4-a329-078037f1ef33" Jan 30 22:04:30 crc kubenswrapper[4869]: I0130 22:04:30.015135 4869 generic.go:334] "Generic (PLEG): container finished" podID="73299ef4-0062-46e4-a329-078037f1ef33" containerID="4d71b98907b306fef7129f27073070194839811cb77cb2abbcad4629b93cbbe7" exitCode=1 Jan 30 22:04:30 crc kubenswrapper[4869]: I0130 22:04:30.015175 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"73299ef4-0062-46e4-a329-078037f1ef33","Type":"ContainerDied","Data":"4d71b98907b306fef7129f27073070194839811cb77cb2abbcad4629b93cbbe7"} Jan 30 22:04:30 crc kubenswrapper[4869]: I0130 22:04:30.015591 4869 scope.go:117] "RemoveContainer" containerID="3676c1f06bfbf4275c2688835979209fcd5c8aede7c6f951b52a8abdc5d2410c" Jan 30 22:04:30 crc kubenswrapper[4869]: I0130 22:04:30.016532 4869 scope.go:117] "RemoveContainer" 
containerID="4d71b98907b306fef7129f27073070194839811cb77cb2abbcad4629b93cbbe7" Jan 30 22:04:30 crc kubenswrapper[4869]: I0130 22:04:30.016807 4869 scope.go:117] "RemoveContainer" containerID="8da988f07fe8049664b374efe74dd964fdf796cfbd3c1bd1dce72c2eee4d263a" Jan 30 22:04:30 crc kubenswrapper[4869]: E0130 22:04:30.017243 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(73299ef4-0062-46e4-a329-078037f1ef33)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 10s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(73299ef4-0062-46e4-a329-078037f1ef33)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="73299ef4-0062-46e4-a329-078037f1ef33" Jan 30 22:04:31 crc kubenswrapper[4869]: I0130 22:04:31.026279 4869 scope.go:117] "RemoveContainer" containerID="4d71b98907b306fef7129f27073070194839811cb77cb2abbcad4629b93cbbe7" Jan 30 22:04:31 crc kubenswrapper[4869]: I0130 22:04:31.026303 4869 scope.go:117] "RemoveContainer" containerID="8da988f07fe8049664b374efe74dd964fdf796cfbd3c1bd1dce72c2eee4d263a" Jan 30 22:04:31 crc kubenswrapper[4869]: E0130 22:04:31.026551 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(73299ef4-0062-46e4-a329-078037f1ef33)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 10s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(73299ef4-0062-46e4-a329-078037f1ef33)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="73299ef4-0062-46e4-a329-078037f1ef33" Jan 30 22:04:31 crc kubenswrapper[4869]: I0130 22:04:31.573863 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:31 crc kubenswrapper[4869]: I0130 22:04:31.990663 4869 patch_prober.go:28] interesting pod/machine-config-daemon-vzgdv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:04:31 crc kubenswrapper[4869]: I0130 22:04:31.991090 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:04:31 crc kubenswrapper[4869]: I0130 22:04:31.991244 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" Jan 30 22:04:31 crc kubenswrapper[4869]: I0130 22:04:31.992043 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fb268ebdb9eedb26a9652217f7a5aa752de4c3f089acc9c91036b9bb0160a969"} pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 22:04:31 crc kubenswrapper[4869]: I0130 22:04:31.992165 4869 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500" containerName="machine-config-daemon" containerID="cri-o://fb268ebdb9eedb26a9652217f7a5aa752de4c3f089acc9c91036b9bb0160a969" gracePeriod=600 Jan 30 22:04:32 crc kubenswrapper[4869]: I0130 22:04:32.032701 4869 scope.go:117] "RemoveContainer" containerID="4d71b98907b306fef7129f27073070194839811cb77cb2abbcad4629b93cbbe7" Jan 30 22:04:32 crc kubenswrapper[4869]: I0130 22:04:32.033048 4869 scope.go:117] "RemoveContainer" containerID="8da988f07fe8049664b374efe74dd964fdf796cfbd3c1bd1dce72c2eee4d263a" Jan 30 22:04:32 crc kubenswrapper[4869]: E0130 22:04:32.033362 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(73299ef4-0062-46e4-a329-078037f1ef33)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 10s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(73299ef4-0062-46e4-a329-078037f1ef33)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="73299ef4-0062-46e4-a329-078037f1ef33" Jan 30 22:04:32 crc kubenswrapper[4869]: I0130 22:04:32.573677 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:32 crc kubenswrapper[4869]: I0130 22:04:32.574068 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:32 crc kubenswrapper[4869]: I0130 22:04:32.776243 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 30 22:04:32 crc kubenswrapper[4869]: I0130 22:04:32.819662 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:33 crc kubenswrapper[4869]: I0130 22:04:33.040243 4869 generic.go:334] "Generic (PLEG): container finished" podID="b6fc0664-5e80-440d-a6e8-4189cdf5c500" containerID="fb268ebdb9eedb26a9652217f7a5aa752de4c3f089acc9c91036b9bb0160a969" exitCode=0 Jan 30 22:04:33 crc kubenswrapper[4869]: I0130 22:04:33.040266 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" event={"ID":"b6fc0664-5e80-440d-a6e8-4189cdf5c500","Type":"ContainerDied","Data":"fb268ebdb9eedb26a9652217f7a5aa752de4c3f089acc9c91036b9bb0160a969"} Jan 30 22:04:33 crc kubenswrapper[4869]: I0130 22:04:33.040573 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" event={"ID":"b6fc0664-5e80-440d-a6e8-4189cdf5c500","Type":"ContainerStarted","Data":"7ffcbacd9fdeb4349443b9f613e5c78dd198a3778ae8b5b04d896ffb86351bb7"} Jan 30 22:04:33 crc kubenswrapper[4869]: I0130 22:04:33.040595 4869 scope.go:117] "RemoveContainer" containerID="6192238ff2265f4c3d1b0ce1e53fb7be4490d1247ceca7a0a3ab91e6567a4b90" Jan 30 22:04:33 crc kubenswrapper[4869]: I0130 22:04:33.041139 4869 scope.go:117] "RemoveContainer" containerID="4d71b98907b306fef7129f27073070194839811cb77cb2abbcad4629b93cbbe7" Jan 30 22:04:33 crc kubenswrapper[4869]: I0130 22:04:33.041157 4869 scope.go:117] "RemoveContainer" containerID="8da988f07fe8049664b374efe74dd964fdf796cfbd3c1bd1dce72c2eee4d263a" Jan 30 22:04:33 crc 
kubenswrapper[4869]: E0130 22:04:33.041392 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(73299ef4-0062-46e4-a329-078037f1ef33)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 10s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(73299ef4-0062-46e4-a329-078037f1ef33)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="73299ef4-0062-46e4-a329-078037f1ef33" Jan 30 22:04:34 crc kubenswrapper[4869]: I0130 22:04:34.779930 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:04:35 crc kubenswrapper[4869]: I0130 22:04:35.989507 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-db-sync-7rs8q"] Jan 30 22:04:35 crc kubenswrapper[4869]: I0130 22:04:35.995310 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-db-sync-7rs8q"] Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.021572 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-0"] Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.021906 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-scheduler-0" podUID="fe46942a-6439-4fdb-a1cf-3db1686ae85b" containerName="cinder-scheduler" containerID="cri-o://de4373d4427bb99305dcfaa5765d5051baa2e853f03446cf0852b6bee9bad816" gracePeriod=30 Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.022003 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-scheduler-0" podUID="fe46942a-6439-4fdb-a1cf-3db1686ae85b" containerName="probe" containerID="cri-o://2553451c001f7f14ccefb67ebdb2cfc4e5f3237a9f51778617d9cc4c56fb04c6" gracePeriod=30 Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.027785 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-volume-volume1-0"] Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.038581 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-backup-0"] Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.038846 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-backup-0" podUID="97c08a25-b1da-4776-ad22-5f474151f2e6" containerName="cinder-backup" containerID="cri-o://bc4a063ae0b88689083ee17f00831ac0e61f02e570a32f62bb23bdb6f6331483" gracePeriod=30 Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.039028 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-backup-0" podUID="97c08a25-b1da-4776-ad22-5f474151f2e6" containerName="probe" containerID="cri-o://bd96eebc3f75ff6c352f30d97b60d4479ec4ba8b5644f88fe62e1f98e8819efd" gracePeriod=30 Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.061525 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinderb570-account-delete-tg9jr"] Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.062322 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinderb570-account-delete-tg9jr" Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.069600 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinderb570-account-delete-tg9jr"] Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.094954 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.095431 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-api-0" podUID="a6955a5a-a679-4af8-8f79-0b849abdcb4f" containerName="cinder-api-log" containerID="cri-o://6b891da32dae8ee96fd5ba92141922a42aebd27dae4e2b5460da8a39b7e4a886" gracePeriod=30 Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.095863 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-api-0" podUID="a6955a5a-a679-4af8-8f79-0b849abdcb4f" containerName="cinder-api" containerID="cri-o://4f973521da247eda0b7b0afd610d6529feef403d778d35a089630bede6fcfc9e" gracePeriod=30 Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.101105 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="cinder-kuttl-tests/cinder-api-0" podUID="a6955a5a-a679-4af8-8f79-0b849abdcb4f" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.109:8776/healthcheck\": EOF" Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.174140 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s5mg\" (UniqueName: \"kubernetes.io/projected/6e9b7b24-bbd9-4f92-bb42-3bfab462e2be-kube-api-access-7s5mg\") pod \"cinderb570-account-delete-tg9jr\" (UID: \"6e9b7b24-bbd9-4f92-bb42-3bfab462e2be\") " pod="cinder-kuttl-tests/cinderb570-account-delete-tg9jr" Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.174613 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e9b7b24-bbd9-4f92-bb42-3bfab462e2be-operator-scripts\") pod \"cinderb570-account-delete-tg9jr\" (UID: \"6e9b7b24-bbd9-4f92-bb42-3bfab462e2be\") " pod="cinder-kuttl-tests/cinderb570-account-delete-tg9jr" Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.276026 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s5mg\" (UniqueName: \"kubernetes.io/projected/6e9b7b24-bbd9-4f92-bb42-3bfab462e2be-kube-api-access-7s5mg\") pod \"cinderb570-account-delete-tg9jr\" (UID: \"6e9b7b24-bbd9-4f92-bb42-3bfab462e2be\") " pod="cinder-kuttl-tests/cinderb570-account-delete-tg9jr" Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.276083 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e9b7b24-bbd9-4f92-bb42-3bfab462e2be-operator-scripts\") pod \"cinderb570-account-delete-tg9jr\" (UID: \"6e9b7b24-bbd9-4f92-bb42-3bfab462e2be\") " pod="cinder-kuttl-tests/cinderb570-account-delete-tg9jr" Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.276821 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e9b7b24-bbd9-4f92-bb42-3bfab462e2be-operator-scripts\") pod \"cinderb570-account-delete-tg9jr\" (UID: \"6e9b7b24-bbd9-4f92-bb42-3bfab462e2be\") " pod="cinder-kuttl-tests/cinderb570-account-delete-tg9jr" Jan 30 22:04:36 crc 
kubenswrapper[4869]: I0130 22:04:36.301920 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s5mg\" (UniqueName: \"kubernetes.io/projected/6e9b7b24-bbd9-4f92-bb42-3bfab462e2be-kube-api-access-7s5mg\") pod \"cinderb570-account-delete-tg9jr\" (UID: \"6e9b7b24-bbd9-4f92-bb42-3bfab462e2be\") " pod="cinder-kuttl-tests/cinderb570-account-delete-tg9jr" Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.439010 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinderb570-account-delete-tg9jr" Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.448606 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.580171 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/73299ef4-0062-46e4-a329-078037f1ef33-config-data-custom\") pod \"73299ef4-0062-46e4-a329-078037f1ef33\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.580238 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73299ef4-0062-46e4-a329-078037f1ef33-combined-ca-bundle\") pod \"73299ef4-0062-46e4-a329-078037f1ef33\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.580286 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-etc-iscsi\") pod \"73299ef4-0062-46e4-a329-078037f1ef33\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.580466 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "73299ef4-0062-46e4-a329-078037f1ef33" (UID: "73299ef4-0062-46e4-a329-078037f1ef33"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.580591 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-etc-machine-id\") pod \"73299ef4-0062-46e4-a329-078037f1ef33\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.580617 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-etc-nvme\") pod \"73299ef4-0062-46e4-a329-078037f1ef33\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.580666 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "73299ef4-0062-46e4-a329-078037f1ef33" (UID: "73299ef4-0062-46e4-a329-078037f1ef33"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.580716 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-sys\") pod \"73299ef4-0062-46e4-a329-078037f1ef33\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.580768 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-run\") pod \"73299ef4-0062-46e4-a329-078037f1ef33\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.580814 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "73299ef4-0062-46e4-a329-078037f1ef33" (UID: "73299ef4-0062-46e4-a329-078037f1ef33"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.580854 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-run" (OuterVolumeSpecName: "run") pod "73299ef4-0062-46e4-a329-078037f1ef33" (UID: "73299ef4-0062-46e4-a329-078037f1ef33"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.580830 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-sys" (OuterVolumeSpecName: "sys") pod "73299ef4-0062-46e4-a329-078037f1ef33" (UID: "73299ef4-0062-46e4-a329-078037f1ef33"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.580796 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78mzb\" (UniqueName: \"kubernetes.io/projected/73299ef4-0062-46e4-a329-078037f1ef33-kube-api-access-78mzb\") pod \"73299ef4-0062-46e4-a329-078037f1ef33\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.581277 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-var-locks-brick\") pod \"73299ef4-0062-46e4-a329-078037f1ef33\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.581307 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-var-lib-cinder\") pod \"73299ef4-0062-46e4-a329-078037f1ef33\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.581336 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-var-locks-cinder\") pod \"73299ef4-0062-46e4-a329-078037f1ef33\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.581365 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73299ef4-0062-46e4-a329-078037f1ef33-config-data\") pod \"73299ef4-0062-46e4-a329-078037f1ef33\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.581368 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "73299ef4-0062-46e4-a329-078037f1ef33" (UID: "73299ef4-0062-46e4-a329-078037f1ef33"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.581397 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-dev\") pod \"73299ef4-0062-46e4-a329-078037f1ef33\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.581404 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "73299ef4-0062-46e4-a329-078037f1ef33" (UID: "73299ef4-0062-46e4-a329-078037f1ef33"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.581404 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "73299ef4-0062-46e4-a329-078037f1ef33" (UID: "73299ef4-0062-46e4-a329-078037f1ef33"). InnerVolumeSpecName "var-lib-cinder". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.581469 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "73299ef4-0062-46e4-a329-078037f1ef33" (UID: "73299ef4-0062-46e4-a329-078037f1ef33"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.581433 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-lib-modules\") pod \"73299ef4-0062-46e4-a329-078037f1ef33\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.581496 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-dev" (OuterVolumeSpecName: "dev") pod "73299ef4-0062-46e4-a329-078037f1ef33" (UID: "73299ef4-0062-46e4-a329-078037f1ef33"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.581549 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73299ef4-0062-46e4-a329-078037f1ef33-scripts\") pod \"73299ef4-0062-46e4-a329-078037f1ef33\" (UID: \"73299ef4-0062-46e4-a329-078037f1ef33\") " Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.581933 4869 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.581951 4869 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.581965 4869 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.581975 4869 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-sys\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.581985 4869 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-run\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.581996 4869 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.582010 4869 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-var-lib-cinder\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.582021 4869 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: 
\"kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-var-locks-cinder\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.582032 4869 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-dev\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.582042 4869 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/73299ef4-0062-46e4-a329-078037f1ef33-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.587686 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73299ef4-0062-46e4-a329-078037f1ef33-scripts" (OuterVolumeSpecName: "scripts") pod "73299ef4-0062-46e4-a329-078037f1ef33" (UID: "73299ef4-0062-46e4-a329-078037f1ef33"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.587750 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73299ef4-0062-46e4-a329-078037f1ef33-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "73299ef4-0062-46e4-a329-078037f1ef33" (UID: "73299ef4-0062-46e4-a329-078037f1ef33"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.587849 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73299ef4-0062-46e4-a329-078037f1ef33-kube-api-access-78mzb" (OuterVolumeSpecName: "kube-api-access-78mzb") pod "73299ef4-0062-46e4-a329-078037f1ef33" (UID: "73299ef4-0062-46e4-a329-078037f1ef33"). InnerVolumeSpecName "kube-api-access-78mzb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.614147 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73299ef4-0062-46e4-a329-078037f1ef33-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "73299ef4-0062-46e4-a329-078037f1ef33" (UID: "73299ef4-0062-46e4-a329-078037f1ef33"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.647985 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73299ef4-0062-46e4-a329-078037f1ef33-config-data" (OuterVolumeSpecName: "config-data") pod "73299ef4-0062-46e4-a329-078037f1ef33" (UID: "73299ef4-0062-46e4-a329-078037f1ef33"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.683066 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78mzb\" (UniqueName: \"kubernetes.io/projected/73299ef4-0062-46e4-a329-078037f1ef33-kube-api-access-78mzb\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.683098 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73299ef4-0062-46e4-a329-078037f1ef33-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.683110 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73299ef4-0062-46e4-a329-078037f1ef33-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.683121 4869 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/73299ef4-0062-46e4-a329-078037f1ef33-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.683132 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73299ef4-0062-46e4-a329-078037f1ef33-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:36 crc kubenswrapper[4869]: I0130 22:04:36.910534 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinderb570-account-delete-tg9jr"] Jan 30 22:04:37 crc kubenswrapper[4869]: I0130 22:04:37.072369 4869 generic.go:334] "Generic (PLEG): container finished" podID="fe46942a-6439-4fdb-a1cf-3db1686ae85b" containerID="2553451c001f7f14ccefb67ebdb2cfc4e5f3237a9f51778617d9cc4c56fb04c6" exitCode=0 Jan 30 22:04:37 crc kubenswrapper[4869]: I0130 22:04:37.072432 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-0" event={"ID":"fe46942a-6439-4fdb-a1cf-3db1686ae85b","Type":"ContainerDied","Data":"2553451c001f7f14ccefb67ebdb2cfc4e5f3237a9f51778617d9cc4c56fb04c6"} Jan 30 22:04:37 crc kubenswrapper[4869]: I0130 22:04:37.074803 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinderb570-account-delete-tg9jr" event={"ID":"6e9b7b24-bbd9-4f92-bb42-3bfab462e2be","Type":"ContainerStarted","Data":"e54fe280205f95b79153a7f66e919e059f74ff96307ba80b4f0a9078c46ac33c"} Jan 30 22:04:37 crc kubenswrapper[4869]: I0130 22:04:37.074867 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinderb570-account-delete-tg9jr" event={"ID":"6e9b7b24-bbd9-4f92-bb42-3bfab462e2be","Type":"ContainerStarted","Data":"69b799b3772b9c0cc570e0c7034ed28e9828311afe019b3d491d3cfed58888f6"} Jan 30 22:04:37 crc kubenswrapper[4869]: I0130 22:04:37.077079 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 30 22:04:37 crc kubenswrapper[4869]: I0130 22:04:37.077126 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"73299ef4-0062-46e4-a329-078037f1ef33","Type":"ContainerDied","Data":"a6677262d3ae53f45575e1e86d517948f7c47210ce30377dee8b2474a9fd0bb3"} Jan 30 22:04:37 crc kubenswrapper[4869]: I0130 22:04:37.077160 4869 scope.go:117] "RemoveContainer" containerID="4d71b98907b306fef7129f27073070194839811cb77cb2abbcad4629b93cbbe7" Jan 30 22:04:37 crc kubenswrapper[4869]: I0130 22:04:37.079479 4869 generic.go:334] "Generic (PLEG): container finished" podID="97c08a25-b1da-4776-ad22-5f474151f2e6" containerID="bd96eebc3f75ff6c352f30d97b60d4479ec4ba8b5644f88fe62e1f98e8819efd" exitCode=0 Jan 30 22:04:37 crc kubenswrapper[4869]: I0130 22:04:37.079543 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-0" event={"ID":"97c08a25-b1da-4776-ad22-5f474151f2e6","Type":"ContainerDied","Data":"bd96eebc3f75ff6c352f30d97b60d4479ec4ba8b5644f88fe62e1f98e8819efd"} Jan 30 22:04:37 crc kubenswrapper[4869]: I0130 22:04:37.082486 4869 generic.go:334] "Generic (PLEG): container finished" podID="a6955a5a-a679-4af8-8f79-0b849abdcb4f" containerID="6b891da32dae8ee96fd5ba92141922a42aebd27dae4e2b5460da8a39b7e4a886" exitCode=143 Jan 30 22:04:37 crc kubenswrapper[4869]: I0130 22:04:37.082519 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"a6955a5a-a679-4af8-8f79-0b849abdcb4f","Type":"ContainerDied","Data":"6b891da32dae8ee96fd5ba92141922a42aebd27dae4e2b5460da8a39b7e4a886"} Jan 30 22:04:37 crc kubenswrapper[4869]: I0130 22:04:37.095010 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinderb570-account-delete-tg9jr" podStartSLOduration=1.094992077 podStartE2EDuration="1.094992077s" podCreationTimestamp="2026-01-30 22:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:04:37.089794994 +0000 UTC m=+1277.975553019" watchObservedRunningTime="2026-01-30 22:04:37.094992077 +0000 UTC m=+1277.980750102" Jan 30 22:04:37 crc kubenswrapper[4869]: I0130 22:04:37.118433 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-volume-volume1-0"] Jan 30 22:04:37 crc kubenswrapper[4869]: I0130 22:04:37.129025 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-volume-volume1-0"] Jan 30 22:04:37 crc kubenswrapper[4869]: I0130 22:04:37.130705 4869 scope.go:117] "RemoveContainer" containerID="8da988f07fe8049664b374efe74dd964fdf796cfbd3c1bd1dce72c2eee4d263a" Jan 30 22:04:37 crc kubenswrapper[4869]: I0130 22:04:37.886203 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fdf0be0-e87f-4a5f-af90-905de1b95f1d" path="/var/lib/kubelet/pods/0fdf0be0-e87f-4a5f-af90-905de1b95f1d/volumes" Jan 30 22:04:37 crc kubenswrapper[4869]: I0130 22:04:37.887344 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73299ef4-0062-46e4-a329-078037f1ef33" path="/var/lib/kubelet/pods/73299ef4-0062-46e4-a329-078037f1ef33/volumes" Jan 30 22:04:38 crc kubenswrapper[4869]: I0130 22:04:38.090505 4869 generic.go:334] "Generic (PLEG): container finished" podID="6e9b7b24-bbd9-4f92-bb42-3bfab462e2be" containerID="e54fe280205f95b79153a7f66e919e059f74ff96307ba80b4f0a9078c46ac33c" exitCode=0 Jan 
30 22:04:38 crc kubenswrapper[4869]: I0130 22:04:38.090597 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinderb570-account-delete-tg9jr" event={"ID":"6e9b7b24-bbd9-4f92-bb42-3bfab462e2be","Type":"ContainerDied","Data":"e54fe280205f95b79153a7f66e919e059f74ff96307ba80b4f0a9078c46ac33c"} Jan 30 22:04:39 crc kubenswrapper[4869]: I0130 22:04:39.352168 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinderb570-account-delete-tg9jr" Jan 30 22:04:39 crc kubenswrapper[4869]: I0130 22:04:39.422016 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7s5mg\" (UniqueName: \"kubernetes.io/projected/6e9b7b24-bbd9-4f92-bb42-3bfab462e2be-kube-api-access-7s5mg\") pod \"6e9b7b24-bbd9-4f92-bb42-3bfab462e2be\" (UID: \"6e9b7b24-bbd9-4f92-bb42-3bfab462e2be\") " Jan 30 22:04:39 crc kubenswrapper[4869]: I0130 22:04:39.422172 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e9b7b24-bbd9-4f92-bb42-3bfab462e2be-operator-scripts\") pod \"6e9b7b24-bbd9-4f92-bb42-3bfab462e2be\" (UID: \"6e9b7b24-bbd9-4f92-bb42-3bfab462e2be\") " Jan 30 22:04:39 crc kubenswrapper[4869]: I0130 22:04:39.423005 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e9b7b24-bbd9-4f92-bb42-3bfab462e2be-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6e9b7b24-bbd9-4f92-bb42-3bfab462e2be" (UID: "6e9b7b24-bbd9-4f92-bb42-3bfab462e2be"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:04:39 crc kubenswrapper[4869]: I0130 22:04:39.429806 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e9b7b24-bbd9-4f92-bb42-3bfab462e2be-kube-api-access-7s5mg" (OuterVolumeSpecName: "kube-api-access-7s5mg") pod "6e9b7b24-bbd9-4f92-bb42-3bfab462e2be" (UID: "6e9b7b24-bbd9-4f92-bb42-3bfab462e2be"). InnerVolumeSpecName "kube-api-access-7s5mg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:04:39 crc kubenswrapper[4869]: I0130 22:04:39.523234 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7s5mg\" (UniqueName: \"kubernetes.io/projected/6e9b7b24-bbd9-4f92-bb42-3bfab462e2be-kube-api-access-7s5mg\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:39 crc kubenswrapper[4869]: I0130 22:04:39.523273 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e9b7b24-bbd9-4f92-bb42-3bfab462e2be-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.118116 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinderb570-account-delete-tg9jr" event={"ID":"6e9b7b24-bbd9-4f92-bb42-3bfab462e2be","Type":"ContainerDied","Data":"69b799b3772b9c0cc570e0c7034ed28e9828311afe019b3d491d3cfed58888f6"} Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.118199 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69b799b3772b9c0cc570e0c7034ed28e9828311afe019b3d491d3cfed58888f6" Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.118325 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinderb570-account-delete-tg9jr" Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.492232 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="cinder-kuttl-tests/cinder-api-0" podUID="a6955a5a-a679-4af8-8f79-0b849abdcb4f" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.109:8776/healthcheck\": read tcp 10.217.0.2:59718->10.217.0.109:8776: read: connection reset by peer" Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.714554 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.796968 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.840810 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-lib-modules\") pod \"97c08a25-b1da-4776-ad22-5f474151f2e6\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.840884 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-var-lib-cinder\") pod \"97c08a25-b1da-4776-ad22-5f474151f2e6\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.840955 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97c08a25-b1da-4776-ad22-5f474151f2e6-combined-ca-bundle\") pod \"97c08a25-b1da-4776-ad22-5f474151f2e6\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.840997 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97c08a25-b1da-4776-ad22-5f474151f2e6-scripts\") pod \"97c08a25-b1da-4776-ad22-5f474151f2e6\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.841023 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-dev\") pod \"97c08a25-b1da-4776-ad22-5f474151f2e6\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.841045 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x42xr\" (UniqueName: \"kubernetes.io/projected/97c08a25-b1da-4776-ad22-5f474151f2e6-kube-api-access-x42xr\") pod \"97c08a25-b1da-4776-ad22-5f474151f2e6\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.841078 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-var-locks-cinder\") pod \"97c08a25-b1da-4776-ad22-5f474151f2e6\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.841095 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-sys\") pod 
\"97c08a25-b1da-4776-ad22-5f474151f2e6\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.841134 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-etc-nvme\") pod \"97c08a25-b1da-4776-ad22-5f474151f2e6\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.841166 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-etc-iscsi\") pod \"97c08a25-b1da-4776-ad22-5f474151f2e6\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.841198 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-etc-machine-id\") pod \"97c08a25-b1da-4776-ad22-5f474151f2e6\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.841240 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-var-locks-brick\") pod \"97c08a25-b1da-4776-ad22-5f474151f2e6\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.841308 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-run\") pod \"97c08a25-b1da-4776-ad22-5f474151f2e6\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.841335 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97c08a25-b1da-4776-ad22-5f474151f2e6-config-data-custom\") pod \"97c08a25-b1da-4776-ad22-5f474151f2e6\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.841363 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97c08a25-b1da-4776-ad22-5f474151f2e6-config-data\") pod \"97c08a25-b1da-4776-ad22-5f474151f2e6\" (UID: \"97c08a25-b1da-4776-ad22-5f474151f2e6\") " Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.841874 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "97c08a25-b1da-4776-ad22-5f474151f2e6" (UID: "97c08a25-b1da-4776-ad22-5f474151f2e6"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.842008 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "97c08a25-b1da-4776-ad22-5f474151f2e6" (UID: "97c08a25-b1da-4776-ad22-5f474151f2e6"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.842034 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "97c08a25-b1da-4776-ad22-5f474151f2e6" (UID: "97c08a25-b1da-4776-ad22-5f474151f2e6"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.843022 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "97c08a25-b1da-4776-ad22-5f474151f2e6" (UID: "97c08a25-b1da-4776-ad22-5f474151f2e6"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.843051 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "97c08a25-b1da-4776-ad22-5f474151f2e6" (UID: "97c08a25-b1da-4776-ad22-5f474151f2e6"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.843068 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "97c08a25-b1da-4776-ad22-5f474151f2e6" (UID: "97c08a25-b1da-4776-ad22-5f474151f2e6"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.843090 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-sys" (OuterVolumeSpecName: "sys") pod "97c08a25-b1da-4776-ad22-5f474151f2e6" (UID: "97c08a25-b1da-4776-ad22-5f474151f2e6"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.843099 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-dev" (OuterVolumeSpecName: "dev") pod "97c08a25-b1da-4776-ad22-5f474151f2e6" (UID: "97c08a25-b1da-4776-ad22-5f474151f2e6"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.843223 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-run" (OuterVolumeSpecName: "run") pod "97c08a25-b1da-4776-ad22-5f474151f2e6" (UID: "97c08a25-b1da-4776-ad22-5f474151f2e6"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.843264 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "97c08a25-b1da-4776-ad22-5f474151f2e6" (UID: "97c08a25-b1da-4776-ad22-5f474151f2e6"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.851461 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97c08a25-b1da-4776-ad22-5f474151f2e6-scripts" (OuterVolumeSpecName: "scripts") pod "97c08a25-b1da-4776-ad22-5f474151f2e6" (UID: "97c08a25-b1da-4776-ad22-5f474151f2e6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.851515 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97c08a25-b1da-4776-ad22-5f474151f2e6-kube-api-access-x42xr" (OuterVolumeSpecName: "kube-api-access-x42xr") pod "97c08a25-b1da-4776-ad22-5f474151f2e6" (UID: "97c08a25-b1da-4776-ad22-5f474151f2e6"). InnerVolumeSpecName "kube-api-access-x42xr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.851550 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97c08a25-b1da-4776-ad22-5f474151f2e6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "97c08a25-b1da-4776-ad22-5f474151f2e6" (UID: "97c08a25-b1da-4776-ad22-5f474151f2e6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.862574 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.896324 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97c08a25-b1da-4776-ad22-5f474151f2e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "97c08a25-b1da-4776-ad22-5f474151f2e6" (UID: "97c08a25-b1da-4776-ad22-5f474151f2e6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.929031 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97c08a25-b1da-4776-ad22-5f474151f2e6-config-data" (OuterVolumeSpecName: "config-data") pod "97c08a25-b1da-4776-ad22-5f474151f2e6" (UID: "97c08a25-b1da-4776-ad22-5f474151f2e6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.942264 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a6955a5a-a679-4af8-8f79-0b849abdcb4f-config-data-custom\") pod \"a6955a5a-a679-4af8-8f79-0b849abdcb4f\" (UID: \"a6955a5a-a679-4af8-8f79-0b849abdcb4f\") " Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.942329 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6955a5a-a679-4af8-8f79-0b849abdcb4f-logs\") pod \"a6955a5a-a679-4af8-8f79-0b849abdcb4f\" (UID: \"a6955a5a-a679-4af8-8f79-0b849abdcb4f\") " Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.942351 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4fnf\" (UniqueName: \"kubernetes.io/projected/a6955a5a-a679-4af8-8f79-0b849abdcb4f-kube-api-access-h4fnf\") pod \"a6955a5a-a679-4af8-8f79-0b849abdcb4f\" (UID: \"a6955a5a-a679-4af8-8f79-0b849abdcb4f\") " Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.942392 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6955a5a-a679-4af8-8f79-0b849abdcb4f-config-data\") pod \"a6955a5a-a679-4af8-8f79-0b849abdcb4f\" (UID: \"a6955a5a-a679-4af8-8f79-0b849abdcb4f\") " Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.942434 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a6955a5a-a679-4af8-8f79-0b849abdcb4f-etc-machine-id\") pod \"a6955a5a-a679-4af8-8f79-0b849abdcb4f\" (UID: \"a6955a5a-a679-4af8-8f79-0b849abdcb4f\") " Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.942455 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6955a5a-a679-4af8-8f79-0b849abdcb4f-scripts\") pod \"a6955a5a-a679-4af8-8f79-0b849abdcb4f\" (UID: \"a6955a5a-a679-4af8-8f79-0b849abdcb4f\") " Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.942485 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6955a5a-a679-4af8-8f79-0b849abdcb4f-combined-ca-bundle\") pod \"a6955a5a-a679-4af8-8f79-0b849abdcb4f\" (UID: \"a6955a5a-a679-4af8-8f79-0b849abdcb4f\") " Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.942524 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6955a5a-a679-4af8-8f79-0b849abdcb4f-internal-tls-certs\") pod \"a6955a5a-a679-4af8-8f79-0b849abdcb4f\" (UID: \"a6955a5a-a679-4af8-8f79-0b849abdcb4f\") " Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.942553 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6955a5a-a679-4af8-8f79-0b849abdcb4f-public-tls-certs\") pod \"a6955a5a-a679-4af8-8f79-0b849abdcb4f\" (UID: \"a6955a5a-a679-4af8-8f79-0b849abdcb4f\") " Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.942843 4869 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 
22:04:40.942856 4869 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-run\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.942866 4869 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97c08a25-b1da-4776-ad22-5f474151f2e6-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.942874 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97c08a25-b1da-4776-ad22-5f474151f2e6-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.942882 4869 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.942909 4869 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-var-lib-cinder\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.942926 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97c08a25-b1da-4776-ad22-5f474151f2e6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.942940 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97c08a25-b1da-4776-ad22-5f474151f2e6-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.942952 4869 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-dev\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.942964 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x42xr\" (UniqueName: \"kubernetes.io/projected/97c08a25-b1da-4776-ad22-5f474151f2e6-kube-api-access-x42xr\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.942977 4869 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-var-locks-cinder\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.942988 4869 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-sys\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.942998 4869 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.943008 4869 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.943017 4869 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/97c08a25-b1da-4776-ad22-5f474151f2e6-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.946179 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6955a5a-a679-4af8-8f79-0b849abdcb4f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a6955a5a-a679-4af8-8f79-0b849abdcb4f" (UID: "a6955a5a-a679-4af8-8f79-0b849abdcb4f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.946774 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6955a5a-a679-4af8-8f79-0b849abdcb4f-logs" (OuterVolumeSpecName: "logs") pod "a6955a5a-a679-4af8-8f79-0b849abdcb4f" (UID: "a6955a5a-a679-4af8-8f79-0b849abdcb4f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.948636 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6955a5a-a679-4af8-8f79-0b849abdcb4f-scripts" (OuterVolumeSpecName: "scripts") pod "a6955a5a-a679-4af8-8f79-0b849abdcb4f" (UID: "a6955a5a-a679-4af8-8f79-0b849abdcb4f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.949791 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6955a5a-a679-4af8-8f79-0b849abdcb4f-kube-api-access-h4fnf" (OuterVolumeSpecName: "kube-api-access-h4fnf") pod "a6955a5a-a679-4af8-8f79-0b849abdcb4f" (UID: "a6955a5a-a679-4af8-8f79-0b849abdcb4f"). InnerVolumeSpecName "kube-api-access-h4fnf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.950702 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6955a5a-a679-4af8-8f79-0b849abdcb4f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a6955a5a-a679-4af8-8f79-0b849abdcb4f" (UID: "a6955a5a-a679-4af8-8f79-0b849abdcb4f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.969857 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6955a5a-a679-4af8-8f79-0b849abdcb4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a6955a5a-a679-4af8-8f79-0b849abdcb4f" (UID: "a6955a5a-a679-4af8-8f79-0b849abdcb4f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.979598 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6955a5a-a679-4af8-8f79-0b849abdcb4f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a6955a5a-a679-4af8-8f79-0b849abdcb4f" (UID: "a6955a5a-a679-4af8-8f79-0b849abdcb4f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.981079 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6955a5a-a679-4af8-8f79-0b849abdcb4f-config-data" (OuterVolumeSpecName: "config-data") pod "a6955a5a-a679-4af8-8f79-0b849abdcb4f" (UID: "a6955a5a-a679-4af8-8f79-0b849abdcb4f"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:40 crc kubenswrapper[4869]: I0130 22:04:40.982366 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6955a5a-a679-4af8-8f79-0b849abdcb4f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a6955a5a-a679-4af8-8f79-0b849abdcb4f" (UID: "a6955a5a-a679-4af8-8f79-0b849abdcb4f"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.043813 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe46942a-6439-4fdb-a1cf-3db1686ae85b-scripts\") pod \"fe46942a-6439-4fdb-a1cf-3db1686ae85b\" (UID: \"fe46942a-6439-4fdb-a1cf-3db1686ae85b\") " Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.043908 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grr56\" (UniqueName: \"kubernetes.io/projected/fe46942a-6439-4fdb-a1cf-3db1686ae85b-kube-api-access-grr56\") pod \"fe46942a-6439-4fdb-a1cf-3db1686ae85b\" (UID: \"fe46942a-6439-4fdb-a1cf-3db1686ae85b\") " Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.043951 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe46942a-6439-4fdb-a1cf-3db1686ae85b-config-data-custom\") pod \"fe46942a-6439-4fdb-a1cf-3db1686ae85b\" (UID: \"fe46942a-6439-4fdb-a1cf-3db1686ae85b\") " Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.043972 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fe46942a-6439-4fdb-a1cf-3db1686ae85b-etc-machine-id\") pod \"fe46942a-6439-4fdb-a1cf-3db1686ae85b\" (UID: \"fe46942a-6439-4fdb-a1cf-3db1686ae85b\") " Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.044036 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe46942a-6439-4fdb-a1cf-3db1686ae85b-config-data\") pod \"fe46942a-6439-4fdb-a1cf-3db1686ae85b\" (UID: \"fe46942a-6439-4fdb-a1cf-3db1686ae85b\") " Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.044091 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe46942a-6439-4fdb-a1cf-3db1686ae85b-combined-ca-bundle\") pod \"fe46942a-6439-4fdb-a1cf-3db1686ae85b\" (UID: \"fe46942a-6439-4fdb-a1cf-3db1686ae85b\") " Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.044349 4869 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a6955a5a-a679-4af8-8f79-0b849abdcb4f-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.044363 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6955a5a-a679-4af8-8f79-0b849abdcb4f-logs\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.044372 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4fnf\" (UniqueName: \"kubernetes.io/projected/a6955a5a-a679-4af8-8f79-0b849abdcb4f-kube-api-access-h4fnf\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.044381 4869 reconciler_common.go:293] 
"Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6955a5a-a679-4af8-8f79-0b849abdcb4f-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.044392 4869 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a6955a5a-a679-4af8-8f79-0b849abdcb4f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.044400 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6955a5a-a679-4af8-8f79-0b849abdcb4f-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.044408 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6955a5a-a679-4af8-8f79-0b849abdcb4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.044416 4869 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6955a5a-a679-4af8-8f79-0b849abdcb4f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.044424 4869 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6955a5a-a679-4af8-8f79-0b849abdcb4f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.044536 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe46942a-6439-4fdb-a1cf-3db1686ae85b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "fe46942a-6439-4fdb-a1cf-3db1686ae85b" (UID: "fe46942a-6439-4fdb-a1cf-3db1686ae85b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.047323 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe46942a-6439-4fdb-a1cf-3db1686ae85b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fe46942a-6439-4fdb-a1cf-3db1686ae85b" (UID: "fe46942a-6439-4fdb-a1cf-3db1686ae85b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.047332 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe46942a-6439-4fdb-a1cf-3db1686ae85b-kube-api-access-grr56" (OuterVolumeSpecName: "kube-api-access-grr56") pod "fe46942a-6439-4fdb-a1cf-3db1686ae85b" (UID: "fe46942a-6439-4fdb-a1cf-3db1686ae85b"). InnerVolumeSpecName "kube-api-access-grr56". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.047388 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe46942a-6439-4fdb-a1cf-3db1686ae85b-scripts" (OuterVolumeSpecName: "scripts") pod "fe46942a-6439-4fdb-a1cf-3db1686ae85b" (UID: "fe46942a-6439-4fdb-a1cf-3db1686ae85b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.082442 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-db-create-lpvxm"] Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.085111 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe46942a-6439-4fdb-a1cf-3db1686ae85b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe46942a-6439-4fdb-a1cf-3db1686ae85b" (UID: "fe46942a-6439-4fdb-a1cf-3db1686ae85b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.089516 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-db-create-lpvxm"] Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.096733 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-b570-account-create-update-9sfxn"] Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.102568 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinderb570-account-delete-tg9jr"] Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.107890 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinderb570-account-delete-tg9jr"] Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.112405 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe46942a-6439-4fdb-a1cf-3db1686ae85b-config-data" (OuterVolumeSpecName: "config-data") pod "fe46942a-6439-4fdb-a1cf-3db1686ae85b" (UID: "fe46942a-6439-4fdb-a1cf-3db1686ae85b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.112774 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-b570-account-create-update-9sfxn"] Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.126251 4869 generic.go:334] "Generic (PLEG): container finished" podID="fe46942a-6439-4fdb-a1cf-3db1686ae85b" containerID="de4373d4427bb99305dcfaa5765d5051baa2e853f03446cf0852b6bee9bad816" exitCode=0 Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.126303 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-0" event={"ID":"fe46942a-6439-4fdb-a1cf-3db1686ae85b","Type":"ContainerDied","Data":"de4373d4427bb99305dcfaa5765d5051baa2e853f03446cf0852b6bee9bad816"} Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.126332 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-0" event={"ID":"fe46942a-6439-4fdb-a1cf-3db1686ae85b","Type":"ContainerDied","Data":"a5f14c920e97ef855e05783cc40148494e51a9bf9f89120b66325d2ddf631b79"} Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.126349 4869 scope.go:117] "RemoveContainer" containerID="2553451c001f7f14ccefb67ebdb2cfc4e5f3237a9f51778617d9cc4c56fb04c6" Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.126490 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.132495 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-0" Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.133176 4869 generic.go:334] "Generic (PLEG): container finished" podID="97c08a25-b1da-4776-ad22-5f474151f2e6" containerID="bc4a063ae0b88689083ee17f00831ac0e61f02e570a32f62bb23bdb6f6331483" exitCode=0 Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.133301 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-0" event={"ID":"97c08a25-b1da-4776-ad22-5f474151f2e6","Type":"ContainerDied","Data":"bc4a063ae0b88689083ee17f00831ac0e61f02e570a32f62bb23bdb6f6331483"} Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.133333 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-0" event={"ID":"97c08a25-b1da-4776-ad22-5f474151f2e6","Type":"ContainerDied","Data":"6ba9a7a4767e7672cd825af26edba370b564ba74bbfea727ef68d081e5f545da"} Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.136655 4869 generic.go:334] "Generic (PLEG): container finished" podID="a6955a5a-a679-4af8-8f79-0b849abdcb4f" containerID="4f973521da247eda0b7b0afd610d6529feef403d778d35a089630bede6fcfc9e" exitCode=0 Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.136754 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-0" Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.136700 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"a6955a5a-a679-4af8-8f79-0b849abdcb4f","Type":"ContainerDied","Data":"4f973521da247eda0b7b0afd610d6529feef403d778d35a089630bede6fcfc9e"} Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.136923 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"a6955a5a-a679-4af8-8f79-0b849abdcb4f","Type":"ContainerDied","Data":"5d87686d8237d63aa796cd13814ea5103a587df078ccf36e9821501512aa36dd"} Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.145520 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe46942a-6439-4fdb-a1cf-3db1686ae85b-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.145568 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe46942a-6439-4fdb-a1cf-3db1686ae85b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.145578 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe46942a-6439-4fdb-a1cf-3db1686ae85b-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.145587 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grr56\" (UniqueName: \"kubernetes.io/projected/fe46942a-6439-4fdb-a1cf-3db1686ae85b-kube-api-access-grr56\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.145596 4869 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe46942a-6439-4fdb-a1cf-3db1686ae85b-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.145623 4869 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/fe46942a-6439-4fdb-a1cf-3db1686ae85b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.163174 4869 scope.go:117] "RemoveContainer" containerID="de4373d4427bb99305dcfaa5765d5051baa2e853f03446cf0852b6bee9bad816" Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.166698 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-0"] Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.175123 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-0"] Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.183636 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-backup-0"] Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.189744 4869 scope.go:117] "RemoveContainer" containerID="2553451c001f7f14ccefb67ebdb2cfc4e5f3237a9f51778617d9cc4c56fb04c6" Jan 30 22:04:41 crc kubenswrapper[4869]: E0130 22:04:41.190597 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2553451c001f7f14ccefb67ebdb2cfc4e5f3237a9f51778617d9cc4c56fb04c6\": container with ID starting with 2553451c001f7f14ccefb67ebdb2cfc4e5f3237a9f51778617d9cc4c56fb04c6 not found: ID does not exist" containerID="2553451c001f7f14ccefb67ebdb2cfc4e5f3237a9f51778617d9cc4c56fb04c6" Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.190641 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2553451c001f7f14ccefb67ebdb2cfc4e5f3237a9f51778617d9cc4c56fb04c6"} err="failed to get container status \"2553451c001f7f14ccefb67ebdb2cfc4e5f3237a9f51778617d9cc4c56fb04c6\": rpc error: code = NotFound desc = could not find container \"2553451c001f7f14ccefb67ebdb2cfc4e5f3237a9f51778617d9cc4c56fb04c6\": container with ID starting with 2553451c001f7f14ccefb67ebdb2cfc4e5f3237a9f51778617d9cc4c56fb04c6 not found: ID does not exist" Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.190665 4869 scope.go:117] "RemoveContainer" containerID="de4373d4427bb99305dcfaa5765d5051baa2e853f03446cf0852b6bee9bad816" Jan 30 22:04:41 crc kubenswrapper[4869]: E0130 22:04:41.190960 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de4373d4427bb99305dcfaa5765d5051baa2e853f03446cf0852b6bee9bad816\": container with ID starting with de4373d4427bb99305dcfaa5765d5051baa2e853f03446cf0852b6bee9bad816 not found: ID does not exist" containerID="de4373d4427bb99305dcfaa5765d5051baa2e853f03446cf0852b6bee9bad816" Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.191001 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de4373d4427bb99305dcfaa5765d5051baa2e853f03446cf0852b6bee9bad816"} err="failed to get container status \"de4373d4427bb99305dcfaa5765d5051baa2e853f03446cf0852b6bee9bad816\": rpc error: code = NotFound desc = could not find container \"de4373d4427bb99305dcfaa5765d5051baa2e853f03446cf0852b6bee9bad816\": container with ID starting with de4373d4427bb99305dcfaa5765d5051baa2e853f03446cf0852b6bee9bad816 not found: ID does not exist" Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.191029 4869 scope.go:117] "RemoveContainer" containerID="bd96eebc3f75ff6c352f30d97b60d4479ec4ba8b5644f88fe62e1f98e8819efd" Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.193265 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["cinder-kuttl-tests/cinder-backup-0"] Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.199188 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.204411 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.205163 4869 scope.go:117] "RemoveContainer" containerID="bc4a063ae0b88689083ee17f00831ac0e61f02e570a32f62bb23bdb6f6331483" Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.219810 4869 scope.go:117] "RemoveContainer" containerID="bd96eebc3f75ff6c352f30d97b60d4479ec4ba8b5644f88fe62e1f98e8819efd" Jan 30 22:04:41 crc kubenswrapper[4869]: E0130 22:04:41.220177 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd96eebc3f75ff6c352f30d97b60d4479ec4ba8b5644f88fe62e1f98e8819efd\": container with ID starting with bd96eebc3f75ff6c352f30d97b60d4479ec4ba8b5644f88fe62e1f98e8819efd not found: ID does not exist" containerID="bd96eebc3f75ff6c352f30d97b60d4479ec4ba8b5644f88fe62e1f98e8819efd" Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.220207 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd96eebc3f75ff6c352f30d97b60d4479ec4ba8b5644f88fe62e1f98e8819efd"} err="failed to get container status \"bd96eebc3f75ff6c352f30d97b60d4479ec4ba8b5644f88fe62e1f98e8819efd\": rpc error: code = NotFound desc = could not find container \"bd96eebc3f75ff6c352f30d97b60d4479ec4ba8b5644f88fe62e1f98e8819efd\": container with ID starting with bd96eebc3f75ff6c352f30d97b60d4479ec4ba8b5644f88fe62e1f98e8819efd not found: ID does not exist" Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.220227 4869 scope.go:117] "RemoveContainer" containerID="bc4a063ae0b88689083ee17f00831ac0e61f02e570a32f62bb23bdb6f6331483" Jan 30 22:04:41 crc kubenswrapper[4869]: E0130 22:04:41.220565 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc4a063ae0b88689083ee17f00831ac0e61f02e570a32f62bb23bdb6f6331483\": container with ID starting with bc4a063ae0b88689083ee17f00831ac0e61f02e570a32f62bb23bdb6f6331483 not found: ID does not exist" containerID="bc4a063ae0b88689083ee17f00831ac0e61f02e570a32f62bb23bdb6f6331483" Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.220584 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc4a063ae0b88689083ee17f00831ac0e61f02e570a32f62bb23bdb6f6331483"} err="failed to get container status \"bc4a063ae0b88689083ee17f00831ac0e61f02e570a32f62bb23bdb6f6331483\": rpc error: code = NotFound desc = could not find container \"bc4a063ae0b88689083ee17f00831ac0e61f02e570a32f62bb23bdb6f6331483\": container with ID starting with bc4a063ae0b88689083ee17f00831ac0e61f02e570a32f62bb23bdb6f6331483 not found: ID does not exist" Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.220597 4869 scope.go:117] "RemoveContainer" containerID="4f973521da247eda0b7b0afd610d6529feef403d778d35a089630bede6fcfc9e" Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.235588 4869 scope.go:117] "RemoveContainer" containerID="6b891da32dae8ee96fd5ba92141922a42aebd27dae4e2b5460da8a39b7e4a886" Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.250562 4869 scope.go:117] "RemoveContainer" containerID="4f973521da247eda0b7b0afd610d6529feef403d778d35a089630bede6fcfc9e" Jan 30 
22:04:41 crc kubenswrapper[4869]: E0130 22:04:41.252289 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f973521da247eda0b7b0afd610d6529feef403d778d35a089630bede6fcfc9e\": container with ID starting with 4f973521da247eda0b7b0afd610d6529feef403d778d35a089630bede6fcfc9e not found: ID does not exist" containerID="4f973521da247eda0b7b0afd610d6529feef403d778d35a089630bede6fcfc9e" Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.252326 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f973521da247eda0b7b0afd610d6529feef403d778d35a089630bede6fcfc9e"} err="failed to get container status \"4f973521da247eda0b7b0afd610d6529feef403d778d35a089630bede6fcfc9e\": rpc error: code = NotFound desc = could not find container \"4f973521da247eda0b7b0afd610d6529feef403d778d35a089630bede6fcfc9e\": container with ID starting with 4f973521da247eda0b7b0afd610d6529feef403d778d35a089630bede6fcfc9e not found: ID does not exist" Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.252354 4869 scope.go:117] "RemoveContainer" containerID="6b891da32dae8ee96fd5ba92141922a42aebd27dae4e2b5460da8a39b7e4a886" Jan 30 22:04:41 crc kubenswrapper[4869]: E0130 22:04:41.252666 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b891da32dae8ee96fd5ba92141922a42aebd27dae4e2b5460da8a39b7e4a886\": container with ID starting with 6b891da32dae8ee96fd5ba92141922a42aebd27dae4e2b5460da8a39b7e4a886 not found: ID does not exist" containerID="6b891da32dae8ee96fd5ba92141922a42aebd27dae4e2b5460da8a39b7e4a886" Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.252695 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b891da32dae8ee96fd5ba92141922a42aebd27dae4e2b5460da8a39b7e4a886"} err="failed to get container status \"6b891da32dae8ee96fd5ba92141922a42aebd27dae4e2b5460da8a39b7e4a886\": rpc error: code = NotFound desc = could not find container \"6b891da32dae8ee96fd5ba92141922a42aebd27dae4e2b5460da8a39b7e4a886\": container with ID starting with 6b891da32dae8ee96fd5ba92141922a42aebd27dae4e2b5460da8a39b7e4a886 not found: ID does not exist" Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.885138 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3baac441-39df-4456-a3dd-2bf91773c4aa" path="/var/lib/kubelet/pods/3baac441-39df-4456-a3dd-2bf91773c4aa/volumes" Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.886176 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e9b7b24-bbd9-4f92-bb42-3bfab462e2be" path="/var/lib/kubelet/pods/6e9b7b24-bbd9-4f92-bb42-3bfab462e2be/volumes" Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.886725 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97c08a25-b1da-4776-ad22-5f474151f2e6" path="/var/lib/kubelet/pods/97c08a25-b1da-4776-ad22-5f474151f2e6/volumes" Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.887984 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6955a5a-a679-4af8-8f79-0b849abdcb4f" path="/var/lib/kubelet/pods/a6955a5a-a679-4af8-8f79-0b849abdcb4f/volumes" Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.888531 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd81195d-554e-4f5f-8aae-4513a7c31542" path="/var/lib/kubelet/pods/bd81195d-554e-4f5f-8aae-4513a7c31542/volumes" Jan 30 22:04:41 
Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.889055 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe46942a-6439-4fdb-a1cf-3db1686ae85b" path="/var/lib/kubelet/pods/fe46942a-6439-4fdb-a1cf-3db1686ae85b/volumes"
Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.992494 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/keystone-db-sync-vmm9m"]
Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.994359 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/keystone-bootstrap-jj9nl"]
Jan 30 22:04:41 crc kubenswrapper[4869]: I0130 22:04:41.999719 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/keystone-db-sync-vmm9m"]
Jan 30 22:04:42 crc kubenswrapper[4869]: I0130 22:04:42.004857 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/keystone-bootstrap-jj9nl"]
Jan 30 22:04:42 crc kubenswrapper[4869]: I0130 22:04:42.029241 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/keystone-5b988b97cc-bzpmz"]
Jan 30 22:04:42 crc kubenswrapper[4869]: I0130 22:04:42.029508 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/keystone-5b988b97cc-bzpmz" podUID="44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce" containerName="keystone-api" containerID="cri-o://75f1036fa6f0f1ff3b7576bfb9f5e9ac887e0a12fa3839de2963ceac60f4410e" gracePeriod=30
Jan 30 22:04:42 crc kubenswrapper[4869]: I0130 22:04:42.059097 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/keystone431c-account-delete-vlh6j"]
Jan 30 22:04:42 crc kubenswrapper[4869]: E0130 22:04:42.064291 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe46942a-6439-4fdb-a1cf-3db1686ae85b" containerName="probe"
Jan 30 22:04:42 crc kubenswrapper[4869]: I0130 22:04:42.064323 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe46942a-6439-4fdb-a1cf-3db1686ae85b" containerName="probe"
Jan 30 22:04:42 crc kubenswrapper[4869]: E0130 22:04:42.064339 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73299ef4-0062-46e4-a329-078037f1ef33" containerName="cinder-volume"
Jan 30 22:04:42 crc kubenswrapper[4869]: I0130 22:04:42.064346 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="73299ef4-0062-46e4-a329-078037f1ef33" containerName="cinder-volume"
Jan 30 22:04:42 crc kubenswrapper[4869]: E0130 22:04:42.064362 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6955a5a-a679-4af8-8f79-0b849abdcb4f" containerName="cinder-api"
Jan 30 22:04:42 crc kubenswrapper[4869]: I0130 22:04:42.064369 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6955a5a-a679-4af8-8f79-0b849abdcb4f" containerName="cinder-api"
Jan 30 22:04:42 crc kubenswrapper[4869]: E0130 22:04:42.064383 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe46942a-6439-4fdb-a1cf-3db1686ae85b" containerName="cinder-scheduler"
Jan 30 22:04:42 crc kubenswrapper[4869]: I0130 22:04:42.064391 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe46942a-6439-4fdb-a1cf-3db1686ae85b" containerName="cinder-scheduler"
Jan 30 22:04:42 crc kubenswrapper[4869]: E0130 22:04:42.064405 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97c08a25-b1da-4776-ad22-5f474151f2e6" containerName="probe"
Jan 30 22:04:42 crc kubenswrapper[4869]: I0130 22:04:42.064412 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="97c08a25-b1da-4776-ad22-5f474151f2e6" containerName="probe"
Jan 30 22:04:42 crc kubenswrapper[4869]: E0130 22:04:42.064420 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97c08a25-b1da-4776-ad22-5f474151f2e6" containerName="cinder-backup"
Jan 30 22:04:42 crc kubenswrapper[4869]: I0130 22:04:42.064427 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="97c08a25-b1da-4776-ad22-5f474151f2e6" containerName="cinder-backup"
Jan 30 22:04:42 crc kubenswrapper[4869]: E0130 22:04:42.064441 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e9b7b24-bbd9-4f92-bb42-3bfab462e2be" containerName="mariadb-account-delete"
Jan 30 22:04:42 crc kubenswrapper[4869]: I0130 22:04:42.064448 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e9b7b24-bbd9-4f92-bb42-3bfab462e2be" containerName="mariadb-account-delete"
Jan 30 22:04:42 crc kubenswrapper[4869]: E0130 22:04:42.064460 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73299ef4-0062-46e4-a329-078037f1ef33" containerName="probe"
Jan 30 22:04:42 crc kubenswrapper[4869]: I0130 22:04:42.064467 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="73299ef4-0062-46e4-a329-078037f1ef33" containerName="probe"
Jan 30 22:04:42 crc kubenswrapper[4869]: E0130 22:04:42.064476 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73299ef4-0062-46e4-a329-078037f1ef33" containerName="probe"
Jan 30 22:04:42 crc kubenswrapper[4869]: I0130 22:04:42.064483 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="73299ef4-0062-46e4-a329-078037f1ef33" containerName="probe"
Jan 30 22:04:42 crc kubenswrapper[4869]: E0130 22:04:42.064492 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6955a5a-a679-4af8-8f79-0b849abdcb4f" containerName="cinder-api-log"
Jan 30 22:04:42 crc kubenswrapper[4869]: I0130 22:04:42.064499 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6955a5a-a679-4af8-8f79-0b849abdcb4f" containerName="cinder-api-log"
Jan 30 22:04:42 crc kubenswrapper[4869]: I0130 22:04:42.064627 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="73299ef4-0062-46e4-a329-078037f1ef33" containerName="cinder-volume"
Jan 30 22:04:42 crc kubenswrapper[4869]: I0130 22:04:42.064640 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="73299ef4-0062-46e4-a329-078037f1ef33" containerName="cinder-volume"
Jan 30 22:04:42 crc kubenswrapper[4869]: I0130 22:04:42.064652 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="73299ef4-0062-46e4-a329-078037f1ef33" containerName="probe"
Jan 30 22:04:42 crc kubenswrapper[4869]: I0130 22:04:42.064663 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6955a5a-a679-4af8-8f79-0b849abdcb4f" containerName="cinder-api-log"
Jan 30 22:04:42 crc kubenswrapper[4869]: I0130 22:04:42.064676 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="97c08a25-b1da-4776-ad22-5f474151f2e6" containerName="probe"
Jan 30 22:04:42 crc kubenswrapper[4869]: I0130 22:04:42.064683 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe46942a-6439-4fdb-a1cf-3db1686ae85b" containerName="probe"
Jan 30 22:04:42 crc kubenswrapper[4869]: I0130 22:04:42.064695 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e9b7b24-bbd9-4f92-bb42-3bfab462e2be" containerName="mariadb-account-delete"
Jan 30 22:04:42 crc kubenswrapper[4869]: I0130 22:04:42.064705 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="97c08a25-b1da-4776-ad22-5f474151f2e6" containerName="cinder-backup"
Jan 30 22:04:42 crc kubenswrapper[4869]: I0130 22:04:42.064716 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe46942a-6439-4fdb-a1cf-3db1686ae85b" containerName="cinder-scheduler"
Jan 30 22:04:42 crc kubenswrapper[4869]: I0130 22:04:42.064730 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6955a5a-a679-4af8-8f79-0b849abdcb4f" containerName="cinder-api"
Jan 30 22:04:42 crc kubenswrapper[4869]: I0130 22:04:42.065364 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone431c-account-delete-vlh6j"
Jan 30 22:04:42 crc kubenswrapper[4869]: I0130 22:04:42.065569 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/keystone431c-account-delete-vlh6j"]
Jan 30 22:04:42 crc kubenswrapper[4869]: I0130 22:04:42.259367 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6ec6fcd-3470-4626-a408-92e100dabfdd-operator-scripts\") pod \"keystone431c-account-delete-vlh6j\" (UID: \"a6ec6fcd-3470-4626-a408-92e100dabfdd\") " pod="cinder-kuttl-tests/keystone431c-account-delete-vlh6j"
Jan 30 22:04:42 crc kubenswrapper[4869]: I0130 22:04:42.259417 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhj5p\" (UniqueName: \"kubernetes.io/projected/a6ec6fcd-3470-4626-a408-92e100dabfdd-kube-api-access-hhj5p\") pod \"keystone431c-account-delete-vlh6j\" (UID: \"a6ec6fcd-3470-4626-a408-92e100dabfdd\") " pod="cinder-kuttl-tests/keystone431c-account-delete-vlh6j"
Jan 30 22:04:42 crc kubenswrapper[4869]: I0130 22:04:42.361310 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6ec6fcd-3470-4626-a408-92e100dabfdd-operator-scripts\") pod \"keystone431c-account-delete-vlh6j\" (UID: \"a6ec6fcd-3470-4626-a408-92e100dabfdd\") " pod="cinder-kuttl-tests/keystone431c-account-delete-vlh6j"
Jan 30 22:04:42 crc kubenswrapper[4869]: I0130 22:04:42.361361 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhj5p\" (UniqueName: \"kubernetes.io/projected/a6ec6fcd-3470-4626-a408-92e100dabfdd-kube-api-access-hhj5p\") pod \"keystone431c-account-delete-vlh6j\" (UID: \"a6ec6fcd-3470-4626-a408-92e100dabfdd\") " pod="cinder-kuttl-tests/keystone431c-account-delete-vlh6j"
Jan 30 22:04:42 crc kubenswrapper[4869]: I0130 22:04:42.361997 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6ec6fcd-3470-4626-a408-92e100dabfdd-operator-scripts\") pod \"keystone431c-account-delete-vlh6j\" (UID: \"a6ec6fcd-3470-4626-a408-92e100dabfdd\") " pod="cinder-kuttl-tests/keystone431c-account-delete-vlh6j"
Jan 30 22:04:42 crc kubenswrapper[4869]: I0130 22:04:42.378410 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhj5p\" (UniqueName: \"kubernetes.io/projected/a6ec6fcd-3470-4626-a408-92e100dabfdd-kube-api-access-hhj5p\") pod \"keystone431c-account-delete-vlh6j\" (UID: \"a6ec6fcd-3470-4626-a408-92e100dabfdd\") " pod="cinder-kuttl-tests/keystone431c-account-delete-vlh6j"
Jan 30 22:04:42 crc kubenswrapper[4869]: I0130 22:04:42.385196 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone431c-account-delete-vlh6j"
Jan 30 22:04:42 crc kubenswrapper[4869]: I0130 22:04:42.658984 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/root-account-create-update-jxqxs"]
Jan 30 22:04:42 crc kubenswrapper[4869]: I0130 22:04:42.664778 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/root-account-create-update-jxqxs"]
Jan 30 22:04:42 crc kubenswrapper[4869]: I0130 22:04:42.684480 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/openstack-galera-2"]
Jan 30 22:04:42 crc kubenswrapper[4869]: I0130 22:04:42.693939 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/openstack-galera-0"]
Jan 30 22:04:42 crc kubenswrapper[4869]: I0130 22:04:42.699544 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/openstack-galera-1"]
Jan 30 22:04:42 crc kubenswrapper[4869]: I0130 22:04:42.810547 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/keystone431c-account-delete-vlh6j"]
Jan 30 22:04:42 crc kubenswrapper[4869]: I0130 22:04:42.834883 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/openstack-galera-2" podUID="7dd1249b-e125-4034-a274-8cf26b3e9b3a" containerName="galera" containerID="cri-o://80538292743bdf2763971136905271c50371abe0c0ac9f528cd209927b087ab0" gracePeriod=30
Jan 30 22:04:43 crc kubenswrapper[4869]: I0130 22:04:43.154218 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone431c-account-delete-vlh6j" event={"ID":"a6ec6fcd-3470-4626-a408-92e100dabfdd","Type":"ContainerStarted","Data":"86ba16aa21d004e3054400c862c2312fc56e70147d4507654c0ad6b8fe2cdbb0"}
Jan 30 22:04:43 crc kubenswrapper[4869]: I0130 22:04:43.154563 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone431c-account-delete-vlh6j" event={"ID":"a6ec6fcd-3470-4626-a408-92e100dabfdd","Type":"ContainerStarted","Data":"e6a00b1a0fa4c3de544402438f3ad26e453450dcec20228231cfa4a13850a860"}
Jan 30 22:04:43 crc kubenswrapper[4869]: I0130 22:04:43.154778 4869 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="cinder-kuttl-tests/keystone431c-account-delete-vlh6j" secret="" err="secret \"galera-openstack-dockercfg-6shmm\" not found"
Jan 30 22:04:43 crc kubenswrapper[4869]: E0130 22:04:43.176020 4869 configmap.go:193] Couldn't get configMap cinder-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found
Jan 30 22:04:43 crc kubenswrapper[4869]: E0130 22:04:43.176089 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a6ec6fcd-3470-4626-a408-92e100dabfdd-operator-scripts podName:a6ec6fcd-3470-4626-a408-92e100dabfdd nodeName:}" failed. No retries permitted until 2026-01-30 22:04:43.676073291 +0000 UTC m=+1284.561831316 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/a6ec6fcd-3470-4626-a408-92e100dabfdd-operator-scripts") pod "keystone431c-account-delete-vlh6j" (UID: "a6ec6fcd-3470-4626-a408-92e100dabfdd") : configmap "openstack-scripts" not found
Jan 30 22:04:43 crc kubenswrapper[4869]: I0130 22:04:43.290872 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/keystone431c-account-delete-vlh6j" podStartSLOduration=1.290855309 podStartE2EDuration="1.290855309s" podCreationTimestamp="2026-01-30 22:04:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:04:43.168958009 +0000 UTC m=+1284.054716034" watchObservedRunningTime="2026-01-30 22:04:43.290855309 +0000 UTC m=+1284.176613334"
Jan 30 22:04:43 crc kubenswrapper[4869]: I0130 22:04:43.294451 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/memcached-0"]
Jan 30 22:04:43 crc kubenswrapper[4869]: I0130 22:04:43.294669 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/memcached-0" podUID="7e27b23c-3307-49b1-93be-8188ed81865f" containerName="memcached" containerID="cri-o://8018f5a2698d9d5047d7754e3fee6586b0f67a2a8d2166d8b425354078600dc0" gracePeriod=30
Jan 30 22:04:43 crc kubenswrapper[4869]: I0130 22:04:43.639579 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/openstack-galera-2"
Jan 30 22:04:43 crc kubenswrapper[4869]: E0130 22:04:43.681197 4869 configmap.go:193] Couldn't get configMap cinder-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found
Jan 30 22:04:43 crc kubenswrapper[4869]: E0130 22:04:43.681273 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a6ec6fcd-3470-4626-a408-92e100dabfdd-operator-scripts podName:a6ec6fcd-3470-4626-a408-92e100dabfdd nodeName:}" failed. No retries permitted until 2026-01-30 22:04:44.681257715 +0000 UTC m=+1285.567015740 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/a6ec6fcd-3470-4626-a408-92e100dabfdd-operator-scripts") pod "keystone431c-account-delete-vlh6j" (UID: "a6ec6fcd-3470-4626-a408-92e100dabfdd") : configmap "openstack-scripts" not found
Jan 30 22:04:43 crc kubenswrapper[4869]: I0130 22:04:43.761538 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/rabbitmq-server-0"]
Jan 30 22:04:43 crc kubenswrapper[4869]: I0130 22:04:43.783020 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dd1249b-e125-4034-a274-8cf26b3e9b3a-operator-scripts\") pod \"7dd1249b-e125-4034-a274-8cf26b3e9b3a\" (UID: \"7dd1249b-e125-4034-a274-8cf26b3e9b3a\") "
Jan 30 22:04:43 crc kubenswrapper[4869]: I0130 22:04:43.783222 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"7dd1249b-e125-4034-a274-8cf26b3e9b3a\" (UID: \"7dd1249b-e125-4034-a274-8cf26b3e9b3a\") "
Jan 30 22:04:43 crc kubenswrapper[4869]: I0130 22:04:43.783245 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7dd1249b-e125-4034-a274-8cf26b3e9b3a-kolla-config\") pod \"7dd1249b-e125-4034-a274-8cf26b3e9b3a\" (UID: \"7dd1249b-e125-4034-a274-8cf26b3e9b3a\") "
Jan 30 22:04:43 crc kubenswrapper[4869]: I0130 22:04:43.783288 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7dd1249b-e125-4034-a274-8cf26b3e9b3a-config-data-default\") pod \"7dd1249b-e125-4034-a274-8cf26b3e9b3a\" (UID: \"7dd1249b-e125-4034-a274-8cf26b3e9b3a\") "
Jan 30 22:04:43 crc kubenswrapper[4869]: I0130 22:04:43.783470 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hw7b\" (UniqueName: \"kubernetes.io/projected/7dd1249b-e125-4034-a274-8cf26b3e9b3a-kube-api-access-6hw7b\") pod \"7dd1249b-e125-4034-a274-8cf26b3e9b3a\" (UID: \"7dd1249b-e125-4034-a274-8cf26b3e9b3a\") "
Jan 30 22:04:43 crc kubenswrapper[4869]: I0130 22:04:43.783550 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7dd1249b-e125-4034-a274-8cf26b3e9b3a-config-data-generated\") pod \"7dd1249b-e125-4034-a274-8cf26b3e9b3a\" (UID: \"7dd1249b-e125-4034-a274-8cf26b3e9b3a\") "
Jan 30 22:04:43 crc kubenswrapper[4869]: I0130 22:04:43.784113 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dd1249b-e125-4034-a274-8cf26b3e9b3a-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "7dd1249b-e125-4034-a274-8cf26b3e9b3a" (UID: "7dd1249b-e125-4034-a274-8cf26b3e9b3a"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:04:43 crc kubenswrapper[4869]: I0130 22:04:43.784791 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dd1249b-e125-4034-a274-8cf26b3e9b3a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7dd1249b-e125-4034-a274-8cf26b3e9b3a" (UID: "7dd1249b-e125-4034-a274-8cf26b3e9b3a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:04:43 crc kubenswrapper[4869]: I0130 22:04:43.785074 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dd1249b-e125-4034-a274-8cf26b3e9b3a-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "7dd1249b-e125-4034-a274-8cf26b3e9b3a" (UID: "7dd1249b-e125-4034-a274-8cf26b3e9b3a"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:04:43 crc kubenswrapper[4869]: I0130 22:04:43.794937 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dd1249b-e125-4034-a274-8cf26b3e9b3a-kube-api-access-6hw7b" (OuterVolumeSpecName: "kube-api-access-6hw7b") pod "7dd1249b-e125-4034-a274-8cf26b3e9b3a" (UID: "7dd1249b-e125-4034-a274-8cf26b3e9b3a"). InnerVolumeSpecName "kube-api-access-6hw7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:04:43 crc kubenswrapper[4869]: I0130 22:04:43.800880 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "mysql-db") pod "7dd1249b-e125-4034-a274-8cf26b3e9b3a" (UID: "7dd1249b-e125-4034-a274-8cf26b3e9b3a"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 22:04:43 crc kubenswrapper[4869]: I0130 22:04:43.885080 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hw7b\" (UniqueName: \"kubernetes.io/projected/7dd1249b-e125-4034-a274-8cf26b3e9b3a-kube-api-access-6hw7b\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:43 crc kubenswrapper[4869]: I0130 22:04:43.885118 4869 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7dd1249b-e125-4034-a274-8cf26b3e9b3a-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:43 crc kubenswrapper[4869]: I0130 22:04:43.885134 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dd1249b-e125-4034-a274-8cf26b3e9b3a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:43 crc kubenswrapper[4869]: I0130 22:04:43.885160 4869 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 30 22:04:43 crc kubenswrapper[4869]: I0130 22:04:43.885174 4869 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7dd1249b-e125-4034-a274-8cf26b3e9b3a-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:43 crc kubenswrapper[4869]: I0130 22:04:43.885186 4869 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7dd1249b-e125-4034-a274-8cf26b3e9b3a-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:43 crc kubenswrapper[4869]: I0130 22:04:43.886561 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e9514e0-09ca-45de-b21e-d629fd81dc25" path="/var/lib/kubelet/pods/2e9514e0-09ca-45de-b21e-d629fd81dc25/volumes" Jan 30 22:04:43 crc kubenswrapper[4869]: I0130 22:04:43.887880 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="545292dd-58e7-4831-b0b9-02b1bf9621b3" path="/var/lib/kubelet/pods/545292dd-58e7-4831-b0b9-02b1bf9621b3/volumes" Jan 30 22:04:43 crc kubenswrapper[4869]: I0130 22:04:43.888552 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1aed461-8512-40e1-a0db-0cdc39789a69" path="/var/lib/kubelet/pods/c1aed461-8512-40e1-a0db-0cdc39789a69/volumes" Jan 30 22:04:43 crc kubenswrapper[4869]: I0130 22:04:43.898174 4869 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 30 22:04:43 crc kubenswrapper[4869]: I0130 22:04:43.986390 4869 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:44 crc kubenswrapper[4869]: I0130 22:04:44.107498 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/rabbitmq-server-0"] Jan 30 22:04:44 crc kubenswrapper[4869]: I0130 22:04:44.162760 4869 generic.go:334] "Generic (PLEG): container finished" podID="7dd1249b-e125-4034-a274-8cf26b3e9b3a" containerID="80538292743bdf2763971136905271c50371abe0c0ac9f528cd209927b087ab0" exitCode=0 Jan 30 22:04:44 crc kubenswrapper[4869]: I0130 22:04:44.162813 4869 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 22:04:44 crc kubenswrapper[4869]: I0130 22:04:44.162813 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/openstack-galera-2"
Jan 30 22:04:44 crc kubenswrapper[4869]: I0130 22:04:44.162826 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-2" event={"ID":"7dd1249b-e125-4034-a274-8cf26b3e9b3a","Type":"ContainerDied","Data":"80538292743bdf2763971136905271c50371abe0c0ac9f528cd209927b087ab0"}
Jan 30 22:04:44 crc kubenswrapper[4869]: I0130 22:04:44.162867 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-2" event={"ID":"7dd1249b-e125-4034-a274-8cf26b3e9b3a","Type":"ContainerDied","Data":"69d59b916ad6489a4eff7fff14575be42d8e0a755d2d88dfdb80731f51ec858e"}
Jan 30 22:04:44 crc kubenswrapper[4869]: I0130 22:04:44.162941 4869 scope.go:117] "RemoveContainer" containerID="80538292743bdf2763971136905271c50371abe0c0ac9f528cd209927b087ab0"
Jan 30 22:04:44 crc kubenswrapper[4869]: I0130 22:04:44.167017 4869 generic.go:334] "Generic (PLEG): container finished" podID="a6ec6fcd-3470-4626-a408-92e100dabfdd" containerID="86ba16aa21d004e3054400c862c2312fc56e70147d4507654c0ad6b8fe2cdbb0" exitCode=1
Jan 30 22:04:44 crc kubenswrapper[4869]: I0130 22:04:44.167061 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone431c-account-delete-vlh6j" event={"ID":"a6ec6fcd-3470-4626-a408-92e100dabfdd","Type":"ContainerDied","Data":"86ba16aa21d004e3054400c862c2312fc56e70147d4507654c0ad6b8fe2cdbb0"}
Jan 30 22:04:44 crc kubenswrapper[4869]: I0130 22:04:44.167621 4869 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="cinder-kuttl-tests/keystone431c-account-delete-vlh6j" secret="" err="secret \"galera-openstack-dockercfg-6shmm\" not found"
Jan 30 22:04:44 crc kubenswrapper[4869]: I0130 22:04:44.167675 4869 scope.go:117] "RemoveContainer" containerID="86ba16aa21d004e3054400c862c2312fc56e70147d4507654c0ad6b8fe2cdbb0"
Jan 30 22:04:44 crc kubenswrapper[4869]: I0130 22:04:44.204633 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/rabbitmq-server-0" podUID="111e74f4-fd99-4f7d-8057-43794129795f" containerName="rabbitmq" containerID="cri-o://34563f83938add6f7bcfa6405c612841cd4dbad688d596ba90fc125ad61fb4be" gracePeriod=604800
Jan 30 22:04:44 crc kubenswrapper[4869]: I0130 22:04:44.211660 4869 scope.go:117] "RemoveContainer" containerID="50169bd0b23f35d9a3d9f5783d49c30ef8902b861f0a95e73712f57cb4bf392a"
Jan 30 22:04:44 crc kubenswrapper[4869]: I0130 22:04:44.216992 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/openstack-galera-2"]
Jan 30 22:04:44 crc kubenswrapper[4869]: I0130 22:04:44.236916 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/openstack-galera-2"]
Jan 30 22:04:44 crc kubenswrapper[4869]: I0130 22:04:44.266980 4869 scope.go:117] "RemoveContainer" containerID="80538292743bdf2763971136905271c50371abe0c0ac9f528cd209927b087ab0"
Jan 30 22:04:44 crc kubenswrapper[4869]: E0130 22:04:44.267679 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80538292743bdf2763971136905271c50371abe0c0ac9f528cd209927b087ab0\": container with ID starting with 80538292743bdf2763971136905271c50371abe0c0ac9f528cd209927b087ab0 not found: ID does not exist" containerID="80538292743bdf2763971136905271c50371abe0c0ac9f528cd209927b087ab0"
containerID={"Type":"cri-o","ID":"80538292743bdf2763971136905271c50371abe0c0ac9f528cd209927b087ab0"} err="failed to get container status \"80538292743bdf2763971136905271c50371abe0c0ac9f528cd209927b087ab0\": rpc error: code = NotFound desc = could not find container \"80538292743bdf2763971136905271c50371abe0c0ac9f528cd209927b087ab0\": container with ID starting with 80538292743bdf2763971136905271c50371abe0c0ac9f528cd209927b087ab0 not found: ID does not exist" Jan 30 22:04:44 crc kubenswrapper[4869]: I0130 22:04:44.267756 4869 scope.go:117] "RemoveContainer" containerID="50169bd0b23f35d9a3d9f5783d49c30ef8902b861f0a95e73712f57cb4bf392a" Jan 30 22:04:44 crc kubenswrapper[4869]: E0130 22:04:44.268463 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50169bd0b23f35d9a3d9f5783d49c30ef8902b861f0a95e73712f57cb4bf392a\": container with ID starting with 50169bd0b23f35d9a3d9f5783d49c30ef8902b861f0a95e73712f57cb4bf392a not found: ID does not exist" containerID="50169bd0b23f35d9a3d9f5783d49c30ef8902b861f0a95e73712f57cb4bf392a" Jan 30 22:04:44 crc kubenswrapper[4869]: I0130 22:04:44.268516 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50169bd0b23f35d9a3d9f5783d49c30ef8902b861f0a95e73712f57cb4bf392a"} err="failed to get container status \"50169bd0b23f35d9a3d9f5783d49c30ef8902b861f0a95e73712f57cb4bf392a\": rpc error: code = NotFound desc = could not find container \"50169bd0b23f35d9a3d9f5783d49c30ef8902b861f0a95e73712f57cb4bf392a\": container with ID starting with 50169bd0b23f35d9a3d9f5783d49c30ef8902b861f0a95e73712f57cb4bf392a not found: ID does not exist" Jan 30 22:04:44 crc kubenswrapper[4869]: E0130 22:04:44.697227 4869 configmap.go:193] Couldn't get configMap cinder-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 30 22:04:44 crc kubenswrapper[4869]: E0130 22:04:44.697609 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a6ec6fcd-3470-4626-a408-92e100dabfdd-operator-scripts podName:a6ec6fcd-3470-4626-a408-92e100dabfdd nodeName:}" failed. No retries permitted until 2026-01-30 22:04:46.697587317 +0000 UTC m=+1287.583345342 (durationBeforeRetry 2s). 
Jan 30 22:04:44 crc kubenswrapper[4869]: E0130 22:04:44.697609 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a6ec6fcd-3470-4626-a408-92e100dabfdd-operator-scripts podName:a6ec6fcd-3470-4626-a408-92e100dabfdd nodeName:}" failed. No retries permitted until 2026-01-30 22:04:46.697587317 +0000 UTC m=+1287.583345342 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/a6ec6fcd-3470-4626-a408-92e100dabfdd-operator-scripts") pod "keystone431c-account-delete-vlh6j" (UID: "a6ec6fcd-3470-4626-a408-92e100dabfdd") : configmap "openstack-scripts" not found
Jan 30 22:04:44 crc kubenswrapper[4869]: I0130 22:04:44.844417 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-64c8b49677-kb8ql"]
Jan 30 22:04:44 crc kubenswrapper[4869]: I0130 22:04:44.844700 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/cinder-operator-controller-manager-64c8b49677-kb8ql" podUID="13c3e212-4606-4375-96dd-b1fcf8a40d94" containerName="manager" containerID="cri-o://ede01769a5c417d7e27ca40d07173200b5707dbe5b97a96a586631e76bc31222" gracePeriod=10
Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.056090 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/openstack-galera-1" podUID="1f14ed92-142f-4e45-8be8-d60ab70d051a" containerName="galera" containerID="cri-o://80160e830354f455ae426d7b3382598dc23003f5f59d64d9953def91c3ca532f" gracePeriod=28
Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.090717 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/cinder-operator-index-6ggmv"]
Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.091126 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/cinder-operator-index-6ggmv" podUID="8c7d59e0-6997-436f-b17d-67e8d1c0f319" containerName="registry-server" containerID="cri-o://034489ba30fbae8c9a8b4d8c6a003b0f6087ad084656ae62876b438e79e2c1a1" gracePeriod=30
Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.145175 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/e9fa3d5a7d9551a560d01de9ea70a85b1277acce4c8214021549f3bcd5jkkh7"]
Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.151180 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/e9fa3d5a7d9551a560d01de9ea70a85b1277acce4c8214021549f3bcd5jkkh7"]
Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.188300 4869 generic.go:334] "Generic (PLEG): container finished" podID="a6ec6fcd-3470-4626-a408-92e100dabfdd" containerID="82afe6df4c3365282dfad41a1e1f0ef259b351fed19b4196f7632736c419ea5a" exitCode=1
Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.188873 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone431c-account-delete-vlh6j" event={"ID":"a6ec6fcd-3470-4626-a408-92e100dabfdd","Type":"ContainerDied","Data":"82afe6df4c3365282dfad41a1e1f0ef259b351fed19b4196f7632736c419ea5a"}
pod="cinder-kuttl-tests/keystone431c-account-delete-vlh6j" secret="" err="secret \"galera-openstack-dockercfg-6shmm\" not found" Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.188943 4869 scope.go:117] "RemoveContainer" containerID="86ba16aa21d004e3054400c862c2312fc56e70147d4507654c0ad6b8fe2cdbb0" Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.189246 4869 scope.go:117] "RemoveContainer" containerID="82afe6df4c3365282dfad41a1e1f0ef259b351fed19b4196f7632736c419ea5a" Jan 30 22:04:45 crc kubenswrapper[4869]: E0130 22:04:45.189689 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-delete\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mariadb-account-delete pod=keystone431c-account-delete-vlh6j_cinder-kuttl-tests(a6ec6fcd-3470-4626-a408-92e100dabfdd)\"" pod="cinder-kuttl-tests/keystone431c-account-delete-vlh6j" podUID="a6ec6fcd-3470-4626-a408-92e100dabfdd" Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.201942 4869 generic.go:334] "Generic (PLEG): container finished" podID="7e27b23c-3307-49b1-93be-8188ed81865f" containerID="8018f5a2698d9d5047d7754e3fee6586b0f67a2a8d2166d8b425354078600dc0" exitCode=0 Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.202048 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/memcached-0" event={"ID":"7e27b23c-3307-49b1-93be-8188ed81865f","Type":"ContainerDied","Data":"8018f5a2698d9d5047d7754e3fee6586b0f67a2a8d2166d8b425354078600dc0"} Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.219769 4869 generic.go:334] "Generic (PLEG): container finished" podID="44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce" containerID="75f1036fa6f0f1ff3b7576bfb9f5e9ac887e0a12fa3839de2963ceac60f4410e" exitCode=0 Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.219829 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-5b988b97cc-bzpmz" event={"ID":"44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce","Type":"ContainerDied","Data":"75f1036fa6f0f1ff3b7576bfb9f5e9ac887e0a12fa3839de2963ceac60f4410e"} Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.221834 4869 generic.go:334] "Generic (PLEG): container finished" podID="13c3e212-4606-4375-96dd-b1fcf8a40d94" containerID="ede01769a5c417d7e27ca40d07173200b5707dbe5b97a96a586631e76bc31222" exitCode=0 Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.221886 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-64c8b49677-kb8ql" event={"ID":"13c3e212-4606-4375-96dd-b1fcf8a40d94","Type":"ContainerDied","Data":"ede01769a5c417d7e27ca40d07173200b5707dbe5b97a96a586631e76bc31222"} Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.256255 4869 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.256255 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/memcached-0"
Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.416224 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8x4vn\" (UniqueName: \"kubernetes.io/projected/7e27b23c-3307-49b1-93be-8188ed81865f-kube-api-access-8x4vn\") pod \"7e27b23c-3307-49b1-93be-8188ed81865f\" (UID: \"7e27b23c-3307-49b1-93be-8188ed81865f\") "
Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.416337 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7e27b23c-3307-49b1-93be-8188ed81865f-kolla-config\") pod \"7e27b23c-3307-49b1-93be-8188ed81865f\" (UID: \"7e27b23c-3307-49b1-93be-8188ed81865f\") "
Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.417138 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e27b23c-3307-49b1-93be-8188ed81865f-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "7e27b23c-3307-49b1-93be-8188ed81865f" (UID: "7e27b23c-3307-49b1-93be-8188ed81865f"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.417604 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7e27b23c-3307-49b1-93be-8188ed81865f-config-data\") pod \"7e27b23c-3307-49b1-93be-8188ed81865f\" (UID: \"7e27b23c-3307-49b1-93be-8188ed81865f\") "
Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.418198 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e27b23c-3307-49b1-93be-8188ed81865f-config-data" (OuterVolumeSpecName: "config-data") pod "7e27b23c-3307-49b1-93be-8188ed81865f" (UID: "7e27b23c-3307-49b1-93be-8188ed81865f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.418301 4869 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7e27b23c-3307-49b1-93be-8188ed81865f-kolla-config\") on node \"crc\" DevicePath \"\""
Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.433093 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e27b23c-3307-49b1-93be-8188ed81865f-kube-api-access-8x4vn" (OuterVolumeSpecName: "kube-api-access-8x4vn") pod "7e27b23c-3307-49b1-93be-8188ed81865f" (UID: "7e27b23c-3307-49b1-93be-8188ed81865f"). InnerVolumeSpecName "kube-api-access-8x4vn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.436226 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-64c8b49677-kb8ql"
Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.521280 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7e27b23c-3307-49b1-93be-8188ed81865f-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.521321 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8x4vn\" (UniqueName: \"kubernetes.io/projected/7e27b23c-3307-49b1-93be-8188ed81865f-kube-api-access-8x4vn\") on node \"crc\" DevicePath \"\""
Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.546914 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone-5b988b97cc-bzpmz"
Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.588210 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-index-6ggmv"
Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.623011 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zprrd\" (UniqueName: \"kubernetes.io/projected/13c3e212-4606-4375-96dd-b1fcf8a40d94-kube-api-access-zprrd\") pod \"13c3e212-4606-4375-96dd-b1fcf8a40d94\" (UID: \"13c3e212-4606-4375-96dd-b1fcf8a40d94\") "
Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.623111 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pnvt\" (UniqueName: \"kubernetes.io/projected/8c7d59e0-6997-436f-b17d-67e8d1c0f319-kube-api-access-2pnvt\") pod \"8c7d59e0-6997-436f-b17d-67e8d1c0f319\" (UID: \"8c7d59e0-6997-436f-b17d-67e8d1c0f319\") "
Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.623176 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/13c3e212-4606-4375-96dd-b1fcf8a40d94-apiservice-cert\") pod \"13c3e212-4606-4375-96dd-b1fcf8a40d94\" (UID: \"13c3e212-4606-4375-96dd-b1fcf8a40d94\") "
Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.623219 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qc68w\" (UniqueName: \"kubernetes.io/projected/44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce-kube-api-access-qc68w\") pod \"44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce\" (UID: \"44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce\") "
Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.623409 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/13c3e212-4606-4375-96dd-b1fcf8a40d94-webhook-cert\") pod \"13c3e212-4606-4375-96dd-b1fcf8a40d94\" (UID: \"13c3e212-4606-4375-96dd-b1fcf8a40d94\") "
Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.623448 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce-fernet-keys\") pod \"44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce\" (UID: \"44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce\") "
Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.623472 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce-scripts\") pod \"44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce\" (UID: \"44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce\") "
Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.630123 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c7d59e0-6997-436f-b17d-67e8d1c0f319-kube-api-access-2pnvt" (OuterVolumeSpecName: "kube-api-access-2pnvt") pod "8c7d59e0-6997-436f-b17d-67e8d1c0f319" (UID: "8c7d59e0-6997-436f-b17d-67e8d1c0f319"). InnerVolumeSpecName "kube-api-access-2pnvt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.630124 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce" (UID: "44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.630238 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13c3e212-4606-4375-96dd-b1fcf8a40d94-kube-api-access-zprrd" (OuterVolumeSpecName: "kube-api-access-zprrd") pod "13c3e212-4606-4375-96dd-b1fcf8a40d94" (UID: "13c3e212-4606-4375-96dd-b1fcf8a40d94"). InnerVolumeSpecName "kube-api-access-zprrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.630313 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce-scripts" (OuterVolumeSpecName: "scripts") pod "44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce" (UID: "44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.631214 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce-kube-api-access-qc68w" (OuterVolumeSpecName: "kube-api-access-qc68w") pod "44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce" (UID: "44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce"). InnerVolumeSpecName "kube-api-access-qc68w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.631328 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13c3e212-4606-4375-96dd-b1fcf8a40d94-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "13c3e212-4606-4375-96dd-b1fcf8a40d94" (UID: "13c3e212-4606-4375-96dd-b1fcf8a40d94"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.724750 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce-config-data\") pod \"44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce\" (UID: \"44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce\") " Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.724813 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce-credential-keys\") pod \"44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce\" (UID: \"44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce\") " Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.725103 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zprrd\" (UniqueName: \"kubernetes.io/projected/13c3e212-4606-4375-96dd-b1fcf8a40d94-kube-api-access-zprrd\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.725125 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pnvt\" (UniqueName: \"kubernetes.io/projected/8c7d59e0-6997-436f-b17d-67e8d1c0f319-kube-api-access-2pnvt\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.725134 4869 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/13c3e212-4606-4375-96dd-b1fcf8a40d94-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.725142 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qc68w\" (UniqueName: \"kubernetes.io/projected/44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce-kube-api-access-qc68w\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.725151 4869 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/13c3e212-4606-4375-96dd-b1fcf8a40d94-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.725160 4869 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.725168 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.728226 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce" (UID: "44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.742448 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce-config-data" (OuterVolumeSpecName: "config-data") pod "44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce" (UID: "44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.786067 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.826316 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0118f339-f278-4491-b96c-705ba304b2b1\") pod \"111e74f4-fd99-4f7d-8057-43794129795f\" (UID: \"111e74f4-fd99-4f7d-8057-43794129795f\") " Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.826376 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/111e74f4-fd99-4f7d-8057-43794129795f-rabbitmq-erlang-cookie\") pod \"111e74f4-fd99-4f7d-8057-43794129795f\" (UID: \"111e74f4-fd99-4f7d-8057-43794129795f\") " Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.826442 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srwkd\" (UniqueName: \"kubernetes.io/projected/111e74f4-fd99-4f7d-8057-43794129795f-kube-api-access-srwkd\") pod \"111e74f4-fd99-4f7d-8057-43794129795f\" (UID: \"111e74f4-fd99-4f7d-8057-43794129795f\") " Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.826525 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/111e74f4-fd99-4f7d-8057-43794129795f-pod-info\") pod \"111e74f4-fd99-4f7d-8057-43794129795f\" (UID: \"111e74f4-fd99-4f7d-8057-43794129795f\") " Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.826600 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/111e74f4-fd99-4f7d-8057-43794129795f-plugins-conf\") pod \"111e74f4-fd99-4f7d-8057-43794129795f\" (UID: \"111e74f4-fd99-4f7d-8057-43794129795f\") " Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.826626 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/111e74f4-fd99-4f7d-8057-43794129795f-rabbitmq-confd\") pod \"111e74f4-fd99-4f7d-8057-43794129795f\" (UID: \"111e74f4-fd99-4f7d-8057-43794129795f\") " Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.826640 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/111e74f4-fd99-4f7d-8057-43794129795f-rabbitmq-plugins\") pod \"111e74f4-fd99-4f7d-8057-43794129795f\" (UID: \"111e74f4-fd99-4f7d-8057-43794129795f\") " Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.826679 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/111e74f4-fd99-4f7d-8057-43794129795f-erlang-cookie-secret\") pod \"111e74f4-fd99-4f7d-8057-43794129795f\" (UID: \"111e74f4-fd99-4f7d-8057-43794129795f\") " Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.827007 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.827031 4869 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.827618 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/111e74f4-fd99-4f7d-8057-43794129795f-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "111e74f4-fd99-4f7d-8057-43794129795f" (UID: "111e74f4-fd99-4f7d-8057-43794129795f"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.827619 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/111e74f4-fd99-4f7d-8057-43794129795f-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "111e74f4-fd99-4f7d-8057-43794129795f" (UID: "111e74f4-fd99-4f7d-8057-43794129795f"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.829108 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/111e74f4-fd99-4f7d-8057-43794129795f-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "111e74f4-fd99-4f7d-8057-43794129795f" (UID: "111e74f4-fd99-4f7d-8057-43794129795f"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.830452 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/111e74f4-fd99-4f7d-8057-43794129795f-pod-info" (OuterVolumeSpecName: "pod-info") pod "111e74f4-fd99-4f7d-8057-43794129795f" (UID: "111e74f4-fd99-4f7d-8057-43794129795f"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.830586 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/111e74f4-fd99-4f7d-8057-43794129795f-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "111e74f4-fd99-4f7d-8057-43794129795f" (UID: "111e74f4-fd99-4f7d-8057-43794129795f"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.843515 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/111e74f4-fd99-4f7d-8057-43794129795f-kube-api-access-srwkd" (OuterVolumeSpecName: "kube-api-access-srwkd") pod "111e74f4-fd99-4f7d-8057-43794129795f" (UID: "111e74f4-fd99-4f7d-8057-43794129795f"). InnerVolumeSpecName "kube-api-access-srwkd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.846256 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0118f339-f278-4491-b96c-705ba304b2b1" (OuterVolumeSpecName: "persistence") pod "111e74f4-fd99-4f7d-8057-43794129795f" (UID: "111e74f4-fd99-4f7d-8057-43794129795f"). InnerVolumeSpecName "pvc-0118f339-f278-4491-b96c-705ba304b2b1". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.892798 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50ef1a66-b644-41b1-90cd-0f64a5628e97" path="/var/lib/kubelet/pods/50ef1a66-b644-41b1-90cd-0f64a5628e97/volumes" Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.893725 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dd1249b-e125-4034-a274-8cf26b3e9b3a" path="/var/lib/kubelet/pods/7dd1249b-e125-4034-a274-8cf26b3e9b3a/volumes" Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.927784 4869 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/111e74f4-fd99-4f7d-8057-43794129795f-pod-info\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.927811 4869 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/111e74f4-fd99-4f7d-8057-43794129795f-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.927820 4869 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/111e74f4-fd99-4f7d-8057-43794129795f-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.927850 4869 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/111e74f4-fd99-4f7d-8057-43794129795f-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.927880 4869 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-0118f339-f278-4491-b96c-705ba304b2b1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0118f339-f278-4491-b96c-705ba304b2b1\") on node \"crc\" " Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.927904 4869 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/111e74f4-fd99-4f7d-8057-43794129795f-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.927916 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srwkd\" (UniqueName: \"kubernetes.io/projected/111e74f4-fd99-4f7d-8057-43794129795f-kube-api-access-srwkd\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.937081 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/111e74f4-fd99-4f7d-8057-43794129795f-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "111e74f4-fd99-4f7d-8057-43794129795f" (UID: "111e74f4-fd99-4f7d-8057-43794129795f"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.943361 4869 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 30 22:04:45 crc kubenswrapper[4869]: I0130 22:04:45.943497 4869 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-0118f339-f278-4491-b96c-705ba304b2b1" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0118f339-f278-4491-b96c-705ba304b2b1") on node "crc" Jan 30 22:04:45 crc kubenswrapper[4869]: E0130 22:04:45.984593 4869 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c7d59e0_6997_436f_b17d_67e8d1c0f319.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13c3e212_4606_4375_96dd_b1fcf8a40d94.slice/crio-16d78f623439d3d4f1e89a8dd6ee42c057b28b7af78097295ba0a93dbef92b0f\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c7d59e0_6997_436f_b17d_67e8d1c0f319.slice/crio-bad2f4d7ab34c4041df5d9cd153e4f0081a69490b6acd4dcb99897c489726d77\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e27b23c_3307_49b1_93be_8188ed81865f.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44dcbfe1_c9ef_44ec_b14f_0c3d1afe14ce.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e27b23c_3307_49b1_93be_8188ed81865f.slice/crio-d40eecb1e528e2056bfb7303d1e5dbb38f81afc3c016767319fdca53b3b51a4c\": RecentStats: unable to find data in memory cache]" Jan 30 22:04:46 crc kubenswrapper[4869]: I0130 22:04:46.029257 4869 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/111e74f4-fd99-4f7d-8057-43794129795f-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:46 crc kubenswrapper[4869]: I0130 22:04:46.029303 4869 reconciler_common.go:293] "Volume detached for volume \"pvc-0118f339-f278-4491-b96c-705ba304b2b1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0118f339-f278-4491-b96c-705ba304b2b1\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:46 crc kubenswrapper[4869]: I0130 22:04:46.233519 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone-5b988b97cc-bzpmz" Jan 30 22:04:46 crc kubenswrapper[4869]: I0130 22:04:46.233525 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-5b988b97cc-bzpmz" event={"ID":"44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce","Type":"ContainerDied","Data":"123ffe81d0938e9c86d4b2c1300b04e0b2f0639501d16e0370f3aff48f9c178d"} Jan 30 22:04:46 crc kubenswrapper[4869]: I0130 22:04:46.233997 4869 scope.go:117] "RemoveContainer" containerID="75f1036fa6f0f1ff3b7576bfb9f5e9ac887e0a12fa3839de2963ceac60f4410e" Jan 30 22:04:46 crc kubenswrapper[4869]: I0130 22:04:46.235087 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-64c8b49677-kb8ql" event={"ID":"13c3e212-4606-4375-96dd-b1fcf8a40d94","Type":"ContainerDied","Data":"16d78f623439d3d4f1e89a8dd6ee42c057b28b7af78097295ba0a93dbef92b0f"} Jan 30 22:04:46 crc kubenswrapper[4869]: I0130 22:04:46.235111 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-64c8b49677-kb8ql" Jan 30 22:04:46 crc kubenswrapper[4869]: I0130 22:04:46.237733 4869 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="cinder-kuttl-tests/keystone431c-account-delete-vlh6j" secret="" err="secret \"galera-openstack-dockercfg-6shmm\" not found" Jan 30 22:04:46 crc kubenswrapper[4869]: I0130 22:04:46.237776 4869 scope.go:117] "RemoveContainer" containerID="82afe6df4c3365282dfad41a1e1f0ef259b351fed19b4196f7632736c419ea5a" Jan 30 22:04:46 crc kubenswrapper[4869]: E0130 22:04:46.238042 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-delete\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mariadb-account-delete pod=keystone431c-account-delete-vlh6j_cinder-kuttl-tests(a6ec6fcd-3470-4626-a408-92e100dabfdd)\"" pod="cinder-kuttl-tests/keystone431c-account-delete-vlh6j" podUID="a6ec6fcd-3470-4626-a408-92e100dabfdd" Jan 30 22:04:46 crc kubenswrapper[4869]: I0130 22:04:46.239391 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/memcached-0" event={"ID":"7e27b23c-3307-49b1-93be-8188ed81865f","Type":"ContainerDied","Data":"d40eecb1e528e2056bfb7303d1e5dbb38f81afc3c016767319fdca53b3b51a4c"} Jan 30 22:04:46 crc kubenswrapper[4869]: I0130 22:04:46.239401 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/memcached-0" Jan 30 22:04:46 crc kubenswrapper[4869]: I0130 22:04:46.241028 4869 generic.go:334] "Generic (PLEG): container finished" podID="8c7d59e0-6997-436f-b17d-67e8d1c0f319" containerID="034489ba30fbae8c9a8b4d8c6a003b0f6087ad084656ae62876b438e79e2c1a1" exitCode=0 Jan 30 22:04:46 crc kubenswrapper[4869]: I0130 22:04:46.241060 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-index-6ggmv" Jan 30 22:04:46 crc kubenswrapper[4869]: I0130 22:04:46.241076 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-index-6ggmv" event={"ID":"8c7d59e0-6997-436f-b17d-67e8d1c0f319","Type":"ContainerDied","Data":"034489ba30fbae8c9a8b4d8c6a003b0f6087ad084656ae62876b438e79e2c1a1"} Jan 30 22:04:46 crc kubenswrapper[4869]: I0130 22:04:46.241092 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-index-6ggmv" event={"ID":"8c7d59e0-6997-436f-b17d-67e8d1c0f319","Type":"ContainerDied","Data":"bad2f4d7ab34c4041df5d9cd153e4f0081a69490b6acd4dcb99897c489726d77"} Jan 30 22:04:46 crc kubenswrapper[4869]: I0130 22:04:46.243524 4869 generic.go:334] "Generic (PLEG): container finished" podID="111e74f4-fd99-4f7d-8057-43794129795f" containerID="34563f83938add6f7bcfa6405c612841cd4dbad688d596ba90fc125ad61fb4be" exitCode=0 Jan 30 22:04:46 crc kubenswrapper[4869]: I0130 22:04:46.243575 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/rabbitmq-server-0" event={"ID":"111e74f4-fd99-4f7d-8057-43794129795f","Type":"ContainerDied","Data":"34563f83938add6f7bcfa6405c612841cd4dbad688d596ba90fc125ad61fb4be"} Jan 30 22:04:46 crc kubenswrapper[4869]: I0130 22:04:46.243598 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 30 22:04:46 crc kubenswrapper[4869]: I0130 22:04:46.243608 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/rabbitmq-server-0" event={"ID":"111e74f4-fd99-4f7d-8057-43794129795f","Type":"ContainerDied","Data":"42616fa0bb82784791ebfe85956162cfc78ae3ea906e73a8bbbb5bbf33fb67a5"} Jan 30 22:04:46 crc kubenswrapper[4869]: I0130 22:04:46.259795 4869 scope.go:117] "RemoveContainer" containerID="ede01769a5c417d7e27ca40d07173200b5707dbe5b97a96a586631e76bc31222" Jan 30 22:04:46 crc kubenswrapper[4869]: I0130 22:04:46.261002 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-64c8b49677-kb8ql"] Jan 30 22:04:46 crc kubenswrapper[4869]: I0130 22:04:46.272023 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-64c8b49677-kb8ql"] Jan 30 22:04:46 crc kubenswrapper[4869]: I0130 22:04:46.287536 4869 scope.go:117] "RemoveContainer" containerID="8018f5a2698d9d5047d7754e3fee6586b0f67a2a8d2166d8b425354078600dc0" Jan 30 22:04:46 crc kubenswrapper[4869]: I0130 22:04:46.309136 4869 scope.go:117] "RemoveContainer" containerID="034489ba30fbae8c9a8b4d8c6a003b0f6087ad084656ae62876b438e79e2c1a1" Jan 30 22:04:46 crc kubenswrapper[4869]: I0130 22:04:46.312912 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/keystone-5b988b97cc-bzpmz"] Jan 30 22:04:46 crc kubenswrapper[4869]: I0130 22:04:46.322621 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/keystone-5b988b97cc-bzpmz"] Jan 30 22:04:46 crc kubenswrapper[4869]: I0130 22:04:46.324263 4869 scope.go:117] "RemoveContainer" containerID="034489ba30fbae8c9a8b4d8c6a003b0f6087ad084656ae62876b438e79e2c1a1" Jan 30 22:04:46 crc kubenswrapper[4869]: E0130 22:04:46.324709 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"034489ba30fbae8c9a8b4d8c6a003b0f6087ad084656ae62876b438e79e2c1a1\": container with ID starting with 034489ba30fbae8c9a8b4d8c6a003b0f6087ad084656ae62876b438e79e2c1a1 not found: ID does not exist" containerID="034489ba30fbae8c9a8b4d8c6a003b0f6087ad084656ae62876b438e79e2c1a1" Jan 30 22:04:46 crc kubenswrapper[4869]: I0130 22:04:46.324749 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"034489ba30fbae8c9a8b4d8c6a003b0f6087ad084656ae62876b438e79e2c1a1"} err="failed to get container status \"034489ba30fbae8c9a8b4d8c6a003b0f6087ad084656ae62876b438e79e2c1a1\": rpc error: code = NotFound desc = could not find container \"034489ba30fbae8c9a8b4d8c6a003b0f6087ad084656ae62876b438e79e2c1a1\": container with ID starting with 034489ba30fbae8c9a8b4d8c6a003b0f6087ad084656ae62876b438e79e2c1a1 not found: ID does not exist" Jan 30 22:04:46 crc kubenswrapper[4869]: I0130 22:04:46.324774 4869 scope.go:117] "RemoveContainer" containerID="34563f83938add6f7bcfa6405c612841cd4dbad688d596ba90fc125ad61fb4be" Jan 30 22:04:46 crc kubenswrapper[4869]: I0130 22:04:46.335251 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/memcached-0"] Jan 30 22:04:46 crc kubenswrapper[4869]: I0130 22:04:46.341086 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/memcached-0"] Jan 30 22:04:46 crc kubenswrapper[4869]: I0130 22:04:46.349157 4869 scope.go:117] "RemoveContainer" 
containerID="cd4655fc40703668ded911e78b28ac16c883c993e9883ba3dc852f674cd2d7f3" Jan 30 22:04:46 crc kubenswrapper[4869]: I0130 22:04:46.350266 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/cinder-operator-index-6ggmv"] Jan 30 22:04:46 crc kubenswrapper[4869]: I0130 22:04:46.360104 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/cinder-operator-index-6ggmv"] Jan 30 22:04:46 crc kubenswrapper[4869]: I0130 22:04:46.365857 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/rabbitmq-server-0"] Jan 30 22:04:46 crc kubenswrapper[4869]: I0130 22:04:46.379886 4869 scope.go:117] "RemoveContainer" containerID="34563f83938add6f7bcfa6405c612841cd4dbad688d596ba90fc125ad61fb4be" Jan 30 22:04:46 crc kubenswrapper[4869]: E0130 22:04:46.380296 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34563f83938add6f7bcfa6405c612841cd4dbad688d596ba90fc125ad61fb4be\": container with ID starting with 34563f83938add6f7bcfa6405c612841cd4dbad688d596ba90fc125ad61fb4be not found: ID does not exist" containerID="34563f83938add6f7bcfa6405c612841cd4dbad688d596ba90fc125ad61fb4be" Jan 30 22:04:46 crc kubenswrapper[4869]: I0130 22:04:46.380367 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34563f83938add6f7bcfa6405c612841cd4dbad688d596ba90fc125ad61fb4be"} err="failed to get container status \"34563f83938add6f7bcfa6405c612841cd4dbad688d596ba90fc125ad61fb4be\": rpc error: code = NotFound desc = could not find container \"34563f83938add6f7bcfa6405c612841cd4dbad688d596ba90fc125ad61fb4be\": container with ID starting with 34563f83938add6f7bcfa6405c612841cd4dbad688d596ba90fc125ad61fb4be not found: ID does not exist" Jan 30 22:04:46 crc kubenswrapper[4869]: I0130 22:04:46.380404 4869 scope.go:117] "RemoveContainer" containerID="cd4655fc40703668ded911e78b28ac16c883c993e9883ba3dc852f674cd2d7f3" Jan 30 22:04:46 crc kubenswrapper[4869]: E0130 22:04:46.380840 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd4655fc40703668ded911e78b28ac16c883c993e9883ba3dc852f674cd2d7f3\": container with ID starting with cd4655fc40703668ded911e78b28ac16c883c993e9883ba3dc852f674cd2d7f3 not found: ID does not exist" containerID="cd4655fc40703668ded911e78b28ac16c883c993e9883ba3dc852f674cd2d7f3" Jan 30 22:04:46 crc kubenswrapper[4869]: I0130 22:04:46.380914 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd4655fc40703668ded911e78b28ac16c883c993e9883ba3dc852f674cd2d7f3"} err="failed to get container status \"cd4655fc40703668ded911e78b28ac16c883c993e9883ba3dc852f674cd2d7f3\": rpc error: code = NotFound desc = could not find container \"cd4655fc40703668ded911e78b28ac16c883c993e9883ba3dc852f674cd2d7f3\": container with ID starting with cd4655fc40703668ded911e78b28ac16c883c993e9883ba3dc852f674cd2d7f3 not found: ID does not exist" Jan 30 22:04:46 crc kubenswrapper[4869]: I0130 22:04:46.381194 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/rabbitmq-server-0"] Jan 30 22:04:46 crc kubenswrapper[4869]: I0130 22:04:46.636312 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-5f98c88f68-6xdqz"] Jan 30 22:04:46 crc kubenswrapper[4869]: I0130 22:04:46.636574 4869 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack-operators/keystone-operator-controller-manager-5f98c88f68-6xdqz" podUID="ca2d32a5-f5a1-4e59-908b-0d55de2c600f" containerName="manager" containerID="cri-o://66b7c51833dc4cc4fa3068fae54acfb8fe3239b0a2f01451cdac49df35cbff4b" gracePeriod=10 Jan 30 22:04:46 crc kubenswrapper[4869]: E0130 22:04:46.738458 4869 configmap.go:193] Couldn't get configMap cinder-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 30 22:04:46 crc kubenswrapper[4869]: E0130 22:04:46.738529 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a6ec6fcd-3470-4626-a408-92e100dabfdd-operator-scripts podName:a6ec6fcd-3470-4626-a408-92e100dabfdd nodeName:}" failed. No retries permitted until 2026-01-30 22:04:50.738514591 +0000 UTC m=+1291.624272616 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/a6ec6fcd-3470-4626-a408-92e100dabfdd-operator-scripts") pod "keystone431c-account-delete-vlh6j" (UID: "a6ec6fcd-3470-4626-a408-92e100dabfdd") : configmap "openstack-scripts" not found Jan 30 22:04:46 crc kubenswrapper[4869]: I0130 22:04:46.862930 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-index-9pzmz"] Jan 30 22:04:46 crc kubenswrapper[4869]: I0130 22:04:46.863168 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/keystone-operator-index-9pzmz" podUID="29afe53d-3124-4365-ab6a-abe5b7630a4b" containerName="registry-server" containerID="cri-o://991abdb178acfbdc0da2d80bb25d10561ca93133022efa522b61f9e61b654835" gracePeriod=30 Jan 30 22:04:46 crc kubenswrapper[4869]: I0130 22:04:46.904438 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqccnw"] Jan 30 22:04:46 crc kubenswrapper[4869]: I0130 22:04:46.913205 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqccnw"] Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.046619 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/openstack-galera-1" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.047572 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/openstack-galera-0" podUID="3ba1f617-3e3a-4d7c-9374-ed5a271550d3" containerName="galera" containerID="cri-o://5de45bcb824ff5c4c497841eb42b64600d45ee772a7559447c29f2e805c89361" gracePeriod=26 Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.079238 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/keystone-db-create-2wgdn"] Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.106351 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/keystone-db-create-2wgdn"] Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.127049 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/keystone-431c-account-create-update-4fcmg"] Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.138040 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/keystone-431c-account-create-update-4fcmg"] Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.152338 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-5f98c88f68-6xdqz" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.157174 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/keystone431c-account-delete-vlh6j"] Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.244707 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7hbd\" (UniqueName: \"kubernetes.io/projected/1f14ed92-142f-4e45-8be8-d60ab70d051a-kube-api-access-w7hbd\") pod \"1f14ed92-142f-4e45-8be8-d60ab70d051a\" (UID: \"1f14ed92-142f-4e45-8be8-d60ab70d051a\") " Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.244758 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f14ed92-142f-4e45-8be8-d60ab70d051a-operator-scripts\") pod \"1f14ed92-142f-4e45-8be8-d60ab70d051a\" (UID: \"1f14ed92-142f-4e45-8be8-d60ab70d051a\") " Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.244843 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1f14ed92-142f-4e45-8be8-d60ab70d051a-config-data-generated\") pod \"1f14ed92-142f-4e45-8be8-d60ab70d051a\" (UID: \"1f14ed92-142f-4e45-8be8-d60ab70d051a\") " Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.244933 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1f14ed92-142f-4e45-8be8-d60ab70d051a-kolla-config\") pod \"1f14ed92-142f-4e45-8be8-d60ab70d051a\" (UID: \"1f14ed92-142f-4e45-8be8-d60ab70d051a\") " Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.244949 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"1f14ed92-142f-4e45-8be8-d60ab70d051a\" (UID: \"1f14ed92-142f-4e45-8be8-d60ab70d051a\") " Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.244976 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1f14ed92-142f-4e45-8be8-d60ab70d051a-config-data-default\") pod \"1f14ed92-142f-4e45-8be8-d60ab70d051a\" (UID: \"1f14ed92-142f-4e45-8be8-d60ab70d051a\") " Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.245415 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f14ed92-142f-4e45-8be8-d60ab70d051a-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "1f14ed92-142f-4e45-8be8-d60ab70d051a" (UID: "1f14ed92-142f-4e45-8be8-d60ab70d051a"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.245439 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f14ed92-142f-4e45-8be8-d60ab70d051a-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "1f14ed92-142f-4e45-8be8-d60ab70d051a" (UID: "1f14ed92-142f-4e45-8be8-d60ab70d051a"). InnerVolumeSpecName "kolla-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.245571 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f14ed92-142f-4e45-8be8-d60ab70d051a-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "1f14ed92-142f-4e45-8be8-d60ab70d051a" (UID: "1f14ed92-142f-4e45-8be8-d60ab70d051a"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.247960 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f14ed92-142f-4e45-8be8-d60ab70d051a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1f14ed92-142f-4e45-8be8-d60ab70d051a" (UID: "1f14ed92-142f-4e45-8be8-d60ab70d051a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.250080 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f14ed92-142f-4e45-8be8-d60ab70d051a-kube-api-access-w7hbd" (OuterVolumeSpecName: "kube-api-access-w7hbd") pod "1f14ed92-142f-4e45-8be8-d60ab70d051a" (UID: "1f14ed92-142f-4e45-8be8-d60ab70d051a"). InnerVolumeSpecName "kube-api-access-w7hbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.251062 4869 generic.go:334] "Generic (PLEG): container finished" podID="1f14ed92-142f-4e45-8be8-d60ab70d051a" containerID="80160e830354f455ae426d7b3382598dc23003f5f59d64d9953def91c3ca532f" exitCode=0 Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.251123 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-1" event={"ID":"1f14ed92-142f-4e45-8be8-d60ab70d051a","Type":"ContainerDied","Data":"80160e830354f455ae426d7b3382598dc23003f5f59d64d9953def91c3ca532f"} Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.251153 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-1" event={"ID":"1f14ed92-142f-4e45-8be8-d60ab70d051a","Type":"ContainerDied","Data":"a19457677783f04a2c925af6cbe77260d724badd615532e6d323152c1ce425d8"} Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.251173 4869 scope.go:117] "RemoveContainer" containerID="80160e830354f455ae426d7b3382598dc23003f5f59d64d9953def91c3ca532f" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.251272 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/openstack-galera-1" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.255218 4869 generic.go:334] "Generic (PLEG): container finished" podID="ca2d32a5-f5a1-4e59-908b-0d55de2c600f" containerID="66b7c51833dc4cc4fa3068fae54acfb8fe3239b0a2f01451cdac49df35cbff4b" exitCode=0 Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.255269 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-5f98c88f68-6xdqz" event={"ID":"ca2d32a5-f5a1-4e59-908b-0d55de2c600f","Type":"ContainerDied","Data":"66b7c51833dc4cc4fa3068fae54acfb8fe3239b0a2f01451cdac49df35cbff4b"} Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.255290 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-5f98c88f68-6xdqz" event={"ID":"ca2d32a5-f5a1-4e59-908b-0d55de2c600f","Type":"ContainerDied","Data":"f5b040999c3cde9d5db74daac90a47c96b41b5501bf4a80e42d25b80d83e52d3"} Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.255326 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-5f98c88f68-6xdqz" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.257268 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "mysql-db") pod "1f14ed92-142f-4e45-8be8-d60ab70d051a" (UID: "1f14ed92-142f-4e45-8be8-d60ab70d051a"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.261388 4869 generic.go:334] "Generic (PLEG): container finished" podID="29afe53d-3124-4365-ab6a-abe5b7630a4b" containerID="991abdb178acfbdc0da2d80bb25d10561ca93133022efa522b61f9e61b654835" exitCode=0 Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.261439 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-9pzmz" event={"ID":"29afe53d-3124-4365-ab6a-abe5b7630a4b","Type":"ContainerDied","Data":"991abdb178acfbdc0da2d80bb25d10561ca93133022efa522b61f9e61b654835"} Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.267079 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-index-9pzmz" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.303782 4869 scope.go:117] "RemoveContainer" containerID="4ec24aaf084d421fbeda3a5af60f542f56071fd93e4494ef2393cfa41e1d3ebc" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.346348 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9nwl\" (UniqueName: \"kubernetes.io/projected/ca2d32a5-f5a1-4e59-908b-0d55de2c600f-kube-api-access-c9nwl\") pod \"ca2d32a5-f5a1-4e59-908b-0d55de2c600f\" (UID: \"ca2d32a5-f5a1-4e59-908b-0d55de2c600f\") " Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.346595 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ca2d32a5-f5a1-4e59-908b-0d55de2c600f-apiservice-cert\") pod \"ca2d32a5-f5a1-4e59-908b-0d55de2c600f\" (UID: \"ca2d32a5-f5a1-4e59-908b-0d55de2c600f\") " Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.346714 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ca2d32a5-f5a1-4e59-908b-0d55de2c600f-webhook-cert\") pod \"ca2d32a5-f5a1-4e59-908b-0d55de2c600f\" (UID: \"ca2d32a5-f5a1-4e59-908b-0d55de2c600f\") " Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.347062 4869 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1f14ed92-142f-4e45-8be8-d60ab70d051a-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.347124 4869 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1f14ed92-142f-4e45-8be8-d60ab70d051a-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.347197 4869 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.347293 4869 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1f14ed92-142f-4e45-8be8-d60ab70d051a-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.347379 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7hbd\" (UniqueName: \"kubernetes.io/projected/1f14ed92-142f-4e45-8be8-d60ab70d051a-kube-api-access-w7hbd\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.347447 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f14ed92-142f-4e45-8be8-d60ab70d051a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.356401 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca2d32a5-f5a1-4e59-908b-0d55de2c600f-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "ca2d32a5-f5a1-4e59-908b-0d55de2c600f" (UID: "ca2d32a5-f5a1-4e59-908b-0d55de2c600f"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.357098 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca2d32a5-f5a1-4e59-908b-0d55de2c600f-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "ca2d32a5-f5a1-4e59-908b-0d55de2c600f" (UID: "ca2d32a5-f5a1-4e59-908b-0d55de2c600f"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.373791 4869 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.400111 4869 scope.go:117] "RemoveContainer" containerID="80160e830354f455ae426d7b3382598dc23003f5f59d64d9953def91c3ca532f" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.400854 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca2d32a5-f5a1-4e59-908b-0d55de2c600f-kube-api-access-c9nwl" (OuterVolumeSpecName: "kube-api-access-c9nwl") pod "ca2d32a5-f5a1-4e59-908b-0d55de2c600f" (UID: "ca2d32a5-f5a1-4e59-908b-0d55de2c600f"). InnerVolumeSpecName "kube-api-access-c9nwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:04:47 crc kubenswrapper[4869]: E0130 22:04:47.404096 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80160e830354f455ae426d7b3382598dc23003f5f59d64d9953def91c3ca532f\": container with ID starting with 80160e830354f455ae426d7b3382598dc23003f5f59d64d9953def91c3ca532f not found: ID does not exist" containerID="80160e830354f455ae426d7b3382598dc23003f5f59d64d9953def91c3ca532f" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.404158 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80160e830354f455ae426d7b3382598dc23003f5f59d64d9953def91c3ca532f"} err="failed to get container status \"80160e830354f455ae426d7b3382598dc23003f5f59d64d9953def91c3ca532f\": rpc error: code = NotFound desc = could not find container \"80160e830354f455ae426d7b3382598dc23003f5f59d64d9953def91c3ca532f\": container with ID starting with 80160e830354f455ae426d7b3382598dc23003f5f59d64d9953def91c3ca532f not found: ID does not exist" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.404190 4869 scope.go:117] "RemoveContainer" containerID="4ec24aaf084d421fbeda3a5af60f542f56071fd93e4494ef2393cfa41e1d3ebc" Jan 30 22:04:47 crc kubenswrapper[4869]: E0130 22:04:47.404701 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ec24aaf084d421fbeda3a5af60f542f56071fd93e4494ef2393cfa41e1d3ebc\": container with ID starting with 4ec24aaf084d421fbeda3a5af60f542f56071fd93e4494ef2393cfa41e1d3ebc not found: ID does not exist" containerID="4ec24aaf084d421fbeda3a5af60f542f56071fd93e4494ef2393cfa41e1d3ebc" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.404756 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ec24aaf084d421fbeda3a5af60f542f56071fd93e4494ef2393cfa41e1d3ebc"} err="failed to get container status \"4ec24aaf084d421fbeda3a5af60f542f56071fd93e4494ef2393cfa41e1d3ebc\": rpc error: code = NotFound desc = could not find container \"4ec24aaf084d421fbeda3a5af60f542f56071fd93e4494ef2393cfa41e1d3ebc\": container with ID starting with 
4ec24aaf084d421fbeda3a5af60f542f56071fd93e4494ef2393cfa41e1d3ebc not found: ID does not exist" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.404785 4869 scope.go:117] "RemoveContainer" containerID="66b7c51833dc4cc4fa3068fae54acfb8fe3239b0a2f01451cdac49df35cbff4b" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.451664 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9bkv\" (UniqueName: \"kubernetes.io/projected/29afe53d-3124-4365-ab6a-abe5b7630a4b-kube-api-access-c9bkv\") pod \"29afe53d-3124-4365-ab6a-abe5b7630a4b\" (UID: \"29afe53d-3124-4365-ab6a-abe5b7630a4b\") " Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.452273 4869 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.452357 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9nwl\" (UniqueName: \"kubernetes.io/projected/ca2d32a5-f5a1-4e59-908b-0d55de2c600f-kube-api-access-c9nwl\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.452460 4869 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ca2d32a5-f5a1-4e59-908b-0d55de2c600f-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.452518 4869 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ca2d32a5-f5a1-4e59-908b-0d55de2c600f-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.453617 4869 scope.go:117] "RemoveContainer" containerID="66b7c51833dc4cc4fa3068fae54acfb8fe3239b0a2f01451cdac49df35cbff4b" Jan 30 22:04:47 crc kubenswrapper[4869]: E0130 22:04:47.456783 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66b7c51833dc4cc4fa3068fae54acfb8fe3239b0a2f01451cdac49df35cbff4b\": container with ID starting with 66b7c51833dc4cc4fa3068fae54acfb8fe3239b0a2f01451cdac49df35cbff4b not found: ID does not exist" containerID="66b7c51833dc4cc4fa3068fae54acfb8fe3239b0a2f01451cdac49df35cbff4b" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.456815 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66b7c51833dc4cc4fa3068fae54acfb8fe3239b0a2f01451cdac49df35cbff4b"} err="failed to get container status \"66b7c51833dc4cc4fa3068fae54acfb8fe3239b0a2f01451cdac49df35cbff4b\": rpc error: code = NotFound desc = could not find container \"66b7c51833dc4cc4fa3068fae54acfb8fe3239b0a2f01451cdac49df35cbff4b\": container with ID starting with 66b7c51833dc4cc4fa3068fae54acfb8fe3239b0a2f01451cdac49df35cbff4b not found: ID does not exist" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.458102 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29afe53d-3124-4365-ab6a-abe5b7630a4b-kube-api-access-c9bkv" (OuterVolumeSpecName: "kube-api-access-c9bkv") pod "29afe53d-3124-4365-ab6a-abe5b7630a4b" (UID: "29afe53d-3124-4365-ab6a-abe5b7630a4b"). InnerVolumeSpecName "kube-api-access-c9bkv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.564445 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9bkv\" (UniqueName: \"kubernetes.io/projected/29afe53d-3124-4365-ab6a-abe5b7630a4b-kube-api-access-c9bkv\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.574319 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone431c-account-delete-vlh6j" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.586470 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/openstack-galera-1"] Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.593192 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/openstack-galera-1"] Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.612387 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-5f98c88f68-6xdqz"] Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.631014 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-5f98c88f68-6xdqz"] Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.665454 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6ec6fcd-3470-4626-a408-92e100dabfdd-operator-scripts\") pod \"a6ec6fcd-3470-4626-a408-92e100dabfdd\" (UID: \"a6ec6fcd-3470-4626-a408-92e100dabfdd\") " Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.665776 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhj5p\" (UniqueName: \"kubernetes.io/projected/a6ec6fcd-3470-4626-a408-92e100dabfdd-kube-api-access-hhj5p\") pod \"a6ec6fcd-3470-4626-a408-92e100dabfdd\" (UID: \"a6ec6fcd-3470-4626-a408-92e100dabfdd\") " Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.666259 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6ec6fcd-3470-4626-a408-92e100dabfdd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a6ec6fcd-3470-4626-a408-92e100dabfdd" (UID: "a6ec6fcd-3470-4626-a408-92e100dabfdd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.668413 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6ec6fcd-3470-4626-a408-92e100dabfdd-kube-api-access-hhj5p" (OuterVolumeSpecName: "kube-api-access-hhj5p") pod "a6ec6fcd-3470-4626-a408-92e100dabfdd" (UID: "a6ec6fcd-3470-4626-a408-92e100dabfdd"). InnerVolumeSpecName "kube-api-access-hhj5p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.767674 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6ec6fcd-3470-4626-a408-92e100dabfdd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.767717 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhj5p\" (UniqueName: \"kubernetes.io/projected/a6ec6fcd-3470-4626-a408-92e100dabfdd-kube-api-access-hhj5p\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.884521 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="111e74f4-fd99-4f7d-8057-43794129795f" path="/var/lib/kubelet/pods/111e74f4-fd99-4f7d-8057-43794129795f/volumes" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.885068 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13c3e212-4606-4375-96dd-b1fcf8a40d94" path="/var/lib/kubelet/pods/13c3e212-4606-4375-96dd-b1fcf8a40d94/volumes" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.885566 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f14ed92-142f-4e45-8be8-d60ab70d051a" path="/var/lib/kubelet/pods/1f14ed92-142f-4e45-8be8-d60ab70d051a/volumes" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.886496 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f76026f-41ae-4897-a348-ac4c49c2c2c5" path="/var/lib/kubelet/pods/3f76026f-41ae-4897-a348-ac4c49c2c2c5/volumes" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.886959 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="448717a9-d5d0-47dc-9b41-ffbefbdbb175" path="/var/lib/kubelet/pods/448717a9-d5d0-47dc-9b41-ffbefbdbb175/volumes" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.887396 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce" path="/var/lib/kubelet/pods/44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce/volumes" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.888218 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e27b23c-3307-49b1-93be-8188ed81865f" path="/var/lib/kubelet/pods/7e27b23c-3307-49b1-93be-8188ed81865f/volumes" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.888635 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c7d59e0-6997-436f-b17d-67e8d1c0f319" path="/var/lib/kubelet/pods/8c7d59e0-6997-436f-b17d-67e8d1c0f319/volumes" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.889090 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad3d5522-788a-47a8-8f82-cab1a12966ad" path="/var/lib/kubelet/pods/ad3d5522-788a-47a8-8f82-cab1a12966ad/volumes" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.890054 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca2d32a5-f5a1-4e59-908b-0d55de2c600f" path="/var/lib/kubelet/pods/ca2d32a5-f5a1-4e59-908b-0d55de2c600f/volumes" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.927037 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/openstack-galera-0" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.970576 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3ba1f617-3e3a-4d7c-9374-ed5a271550d3-config-data-default\") pod \"3ba1f617-3e3a-4d7c-9374-ed5a271550d3\" (UID: \"3ba1f617-3e3a-4d7c-9374-ed5a271550d3\") " Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.970666 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ba1f617-3e3a-4d7c-9374-ed5a271550d3-operator-scripts\") pod \"3ba1f617-3e3a-4d7c-9374-ed5a271550d3\" (UID: \"3ba1f617-3e3a-4d7c-9374-ed5a271550d3\") " Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.970713 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"3ba1f617-3e3a-4d7c-9374-ed5a271550d3\" (UID: \"3ba1f617-3e3a-4d7c-9374-ed5a271550d3\") " Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.970753 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rl4f\" (UniqueName: \"kubernetes.io/projected/3ba1f617-3e3a-4d7c-9374-ed5a271550d3-kube-api-access-6rl4f\") pod \"3ba1f617-3e3a-4d7c-9374-ed5a271550d3\" (UID: \"3ba1f617-3e3a-4d7c-9374-ed5a271550d3\") " Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.970803 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3ba1f617-3e3a-4d7c-9374-ed5a271550d3-kolla-config\") pod \"3ba1f617-3e3a-4d7c-9374-ed5a271550d3\" (UID: \"3ba1f617-3e3a-4d7c-9374-ed5a271550d3\") " Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.970871 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3ba1f617-3e3a-4d7c-9374-ed5a271550d3-config-data-generated\") pod \"3ba1f617-3e3a-4d7c-9374-ed5a271550d3\" (UID: \"3ba1f617-3e3a-4d7c-9374-ed5a271550d3\") " Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.971201 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ba1f617-3e3a-4d7c-9374-ed5a271550d3-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "3ba1f617-3e3a-4d7c-9374-ed5a271550d3" (UID: "3ba1f617-3e3a-4d7c-9374-ed5a271550d3"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.971552 4869 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3ba1f617-3e3a-4d7c-9374-ed5a271550d3-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.971745 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ba1f617-3e3a-4d7c-9374-ed5a271550d3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3ba1f617-3e3a-4d7c-9374-ed5a271550d3" (UID: "3ba1f617-3e3a-4d7c-9374-ed5a271550d3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.972083 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ba1f617-3e3a-4d7c-9374-ed5a271550d3-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "3ba1f617-3e3a-4d7c-9374-ed5a271550d3" (UID: "3ba1f617-3e3a-4d7c-9374-ed5a271550d3"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.974841 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ba1f617-3e3a-4d7c-9374-ed5a271550d3-kube-api-access-6rl4f" (OuterVolumeSpecName: "kube-api-access-6rl4f") pod "3ba1f617-3e3a-4d7c-9374-ed5a271550d3" (UID: "3ba1f617-3e3a-4d7c-9374-ed5a271550d3"). InnerVolumeSpecName "kube-api-access-6rl4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.977110 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ba1f617-3e3a-4d7c-9374-ed5a271550d3-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "3ba1f617-3e3a-4d7c-9374-ed5a271550d3" (UID: "3ba1f617-3e3a-4d7c-9374-ed5a271550d3"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:04:47 crc kubenswrapper[4869]: I0130 22:04:47.979563 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "mysql-db") pod "3ba1f617-3e3a-4d7c-9374-ed5a271550d3" (UID: "3ba1f617-3e3a-4d7c-9374-ed5a271550d3"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 22:04:48 crc kubenswrapper[4869]: I0130 22:04:48.072965 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ba1f617-3e3a-4d7c-9374-ed5a271550d3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:48 crc kubenswrapper[4869]: I0130 22:04:48.073025 4869 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 30 22:04:48 crc kubenswrapper[4869]: I0130 22:04:48.073040 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rl4f\" (UniqueName: \"kubernetes.io/projected/3ba1f617-3e3a-4d7c-9374-ed5a271550d3-kube-api-access-6rl4f\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:48 crc kubenswrapper[4869]: I0130 22:04:48.073053 4869 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3ba1f617-3e3a-4d7c-9374-ed5a271550d3-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:48 crc kubenswrapper[4869]: I0130 22:04:48.073062 4869 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3ba1f617-3e3a-4d7c-9374-ed5a271550d3-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:48 crc kubenswrapper[4869]: I0130 22:04:48.084079 4869 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 30 22:04:48 crc kubenswrapper[4869]: I0130 22:04:48.180168 4869 reconciler_common.go:293] "Volume detached for volume 
\"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:48 crc kubenswrapper[4869]: I0130 22:04:48.279918 4869 generic.go:334] "Generic (PLEG): container finished" podID="3ba1f617-3e3a-4d7c-9374-ed5a271550d3" containerID="5de45bcb824ff5c4c497841eb42b64600d45ee772a7559447c29f2e805c89361" exitCode=0 Jan 30 22:04:48 crc kubenswrapper[4869]: I0130 22:04:48.280141 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/openstack-galera-0" Jan 30 22:04:48 crc kubenswrapper[4869]: I0130 22:04:48.280210 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-0" event={"ID":"3ba1f617-3e3a-4d7c-9374-ed5a271550d3","Type":"ContainerDied","Data":"5de45bcb824ff5c4c497841eb42b64600d45ee772a7559447c29f2e805c89361"} Jan 30 22:04:48 crc kubenswrapper[4869]: I0130 22:04:48.280244 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-0" event={"ID":"3ba1f617-3e3a-4d7c-9374-ed5a271550d3","Type":"ContainerDied","Data":"6038188a1d9be5ac66be7ae3803d8e9c27f2e2f9e8e1c08d20802fef9bf66524"} Jan 30 22:04:48 crc kubenswrapper[4869]: I0130 22:04:48.280374 4869 scope.go:117] "RemoveContainer" containerID="5de45bcb824ff5c4c497841eb42b64600d45ee772a7559447c29f2e805c89361" Jan 30 22:04:48 crc kubenswrapper[4869]: I0130 22:04:48.290629 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone431c-account-delete-vlh6j" event={"ID":"a6ec6fcd-3470-4626-a408-92e100dabfdd","Type":"ContainerDied","Data":"e6a00b1a0fa4c3de544402438f3ad26e453450dcec20228231cfa4a13850a860"} Jan 30 22:04:48 crc kubenswrapper[4869]: I0130 22:04:48.295949 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-9pzmz" event={"ID":"29afe53d-3124-4365-ab6a-abe5b7630a4b","Type":"ContainerDied","Data":"1ada8ac0847e2bcfafedf0759284be1ef10471bb8261da088ea27dfa0ebf0ce4"} Jan 30 22:04:48 crc kubenswrapper[4869]: I0130 22:04:48.296090 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-index-9pzmz" Jan 30 22:04:48 crc kubenswrapper[4869]: I0130 22:04:48.301946 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/keystone431c-account-delete-vlh6j" Jan 30 22:04:48 crc kubenswrapper[4869]: I0130 22:04:48.314243 4869 scope.go:117] "RemoveContainer" containerID="d70033fca62370f592e349e86ba9143accfc897a1c8b5e66f6cdb064a04cf039" Jan 30 22:04:48 crc kubenswrapper[4869]: I0130 22:04:48.333017 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-index-9pzmz"] Jan 30 22:04:48 crc kubenswrapper[4869]: I0130 22:04:48.338204 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/keystone-operator-index-9pzmz"] Jan 30 22:04:48 crc kubenswrapper[4869]: I0130 22:04:48.340374 4869 scope.go:117] "RemoveContainer" containerID="5de45bcb824ff5c4c497841eb42b64600d45ee772a7559447c29f2e805c89361" Jan 30 22:04:48 crc kubenswrapper[4869]: E0130 22:04:48.340829 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5de45bcb824ff5c4c497841eb42b64600d45ee772a7559447c29f2e805c89361\": container with ID starting with 5de45bcb824ff5c4c497841eb42b64600d45ee772a7559447c29f2e805c89361 not found: ID does not exist" containerID="5de45bcb824ff5c4c497841eb42b64600d45ee772a7559447c29f2e805c89361" Jan 30 22:04:48 crc kubenswrapper[4869]: I0130 22:04:48.340873 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5de45bcb824ff5c4c497841eb42b64600d45ee772a7559447c29f2e805c89361"} err="failed to get container status \"5de45bcb824ff5c4c497841eb42b64600d45ee772a7559447c29f2e805c89361\": rpc error: code = NotFound desc = could not find container \"5de45bcb824ff5c4c497841eb42b64600d45ee772a7559447c29f2e805c89361\": container with ID starting with 5de45bcb824ff5c4c497841eb42b64600d45ee772a7559447c29f2e805c89361 not found: ID does not exist" Jan 30 22:04:48 crc kubenswrapper[4869]: I0130 22:04:48.340900 4869 scope.go:117] "RemoveContainer" containerID="d70033fca62370f592e349e86ba9143accfc897a1c8b5e66f6cdb064a04cf039" Jan 30 22:04:48 crc kubenswrapper[4869]: E0130 22:04:48.341233 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d70033fca62370f592e349e86ba9143accfc897a1c8b5e66f6cdb064a04cf039\": container with ID starting with d70033fca62370f592e349e86ba9143accfc897a1c8b5e66f6cdb064a04cf039 not found: ID does not exist" containerID="d70033fca62370f592e349e86ba9143accfc897a1c8b5e66f6cdb064a04cf039" Jan 30 22:04:48 crc kubenswrapper[4869]: I0130 22:04:48.341278 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d70033fca62370f592e349e86ba9143accfc897a1c8b5e66f6cdb064a04cf039"} err="failed to get container status \"d70033fca62370f592e349e86ba9143accfc897a1c8b5e66f6cdb064a04cf039\": rpc error: code = NotFound desc = could not find container \"d70033fca62370f592e349e86ba9143accfc897a1c8b5e66f6cdb064a04cf039\": container with ID starting with d70033fca62370f592e349e86ba9143accfc897a1c8b5e66f6cdb064a04cf039 not found: ID does not exist" Jan 30 22:04:48 crc kubenswrapper[4869]: I0130 22:04:48.341305 4869 scope.go:117] "RemoveContainer" containerID="82afe6df4c3365282dfad41a1e1f0ef259b351fed19b4196f7632736c419ea5a" Jan 30 22:04:48 crc kubenswrapper[4869]: I0130 22:04:48.348203 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/openstack-galera-0"] Jan 30 22:04:48 crc kubenswrapper[4869]: I0130 22:04:48.354046 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["cinder-kuttl-tests/openstack-galera-0"] Jan 30 22:04:48 crc kubenswrapper[4869]: I0130 22:04:48.358288 4869 scope.go:117] "RemoveContainer" containerID="991abdb178acfbdc0da2d80bb25d10561ca93133022efa522b61f9e61b654835" Jan 30 22:04:48 crc kubenswrapper[4869]: I0130 22:04:48.367001 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/keystone431c-account-delete-vlh6j"] Jan 30 22:04:48 crc kubenswrapper[4869]: I0130 22:04:48.371699 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/keystone431c-account-delete-vlh6j"] Jan 30 22:04:49 crc kubenswrapper[4869]: I0130 22:04:49.013197 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-26p8t"] Jan 30 22:04:49 crc kubenswrapper[4869]: I0130 22:04:49.013398 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-26p8t" podUID="20d7cc6e-b969-4768-978d-534adef89f4f" containerName="operator" containerID="cri-o://f58320dfca0855f69000bfd485396cf9d75828010486072e1ed4b7c66ac53c9c" gracePeriod=10 Jan 30 22:04:49 crc kubenswrapper[4869]: I0130 22:04:49.302034 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-rzlf9"] Jan 30 22:04:49 crc kubenswrapper[4869]: I0130 22:04:49.302319 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/rabbitmq-cluster-operator-index-rzlf9" podUID="de8dd8df-568e-4e5c-a2e7-5dcbe0c8e92a" containerName="registry-server" containerID="cri-o://937d11294624b3c3ef6e8838dcb5b80e6352781b7ec1a83ca26d3031998997a6" gracePeriod=30 Jan 30 22:04:49 crc kubenswrapper[4869]: I0130 22:04:49.328254 4869 generic.go:334] "Generic (PLEG): container finished" podID="20d7cc6e-b969-4768-978d-534adef89f4f" containerID="f58320dfca0855f69000bfd485396cf9d75828010486072e1ed4b7c66ac53c9c" exitCode=0 Jan 30 22:04:49 crc kubenswrapper[4869]: I0130 22:04:49.328332 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-26p8t" event={"ID":"20d7cc6e-b969-4768-978d-534adef89f4f","Type":"ContainerDied","Data":"f58320dfca0855f69000bfd485396cf9d75828010486072e1ed4b7c66ac53c9c"} Jan 30 22:04:49 crc kubenswrapper[4869]: I0130 22:04:49.343093 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sl5nt"] Jan 30 22:04:49 crc kubenswrapper[4869]: I0130 22:04:49.351127 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590sl5nt"] Jan 30 22:04:49 crc kubenswrapper[4869]: I0130 22:04:49.444940 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-26p8t" Jan 30 22:04:49 crc kubenswrapper[4869]: I0130 22:04:49.496367 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zj52n\" (UniqueName: \"kubernetes.io/projected/20d7cc6e-b969-4768-978d-534adef89f4f-kube-api-access-zj52n\") pod \"20d7cc6e-b969-4768-978d-534adef89f4f\" (UID: \"20d7cc6e-b969-4768-978d-534adef89f4f\") " Jan 30 22:04:49 crc kubenswrapper[4869]: I0130 22:04:49.503981 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20d7cc6e-b969-4768-978d-534adef89f4f-kube-api-access-zj52n" (OuterVolumeSpecName: "kube-api-access-zj52n") pod "20d7cc6e-b969-4768-978d-534adef89f4f" (UID: "20d7cc6e-b969-4768-978d-534adef89f4f"). InnerVolumeSpecName "kube-api-access-zj52n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:04:49 crc kubenswrapper[4869]: I0130 22:04:49.598334 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zj52n\" (UniqueName: \"kubernetes.io/projected/20d7cc6e-b969-4768-978d-534adef89f4f-kube-api-access-zj52n\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:49 crc kubenswrapper[4869]: I0130 22:04:49.728105 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-rzlf9" Jan 30 22:04:49 crc kubenswrapper[4869]: I0130 22:04:49.800668 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxtk2\" (UniqueName: \"kubernetes.io/projected/de8dd8df-568e-4e5c-a2e7-5dcbe0c8e92a-kube-api-access-sxtk2\") pod \"de8dd8df-568e-4e5c-a2e7-5dcbe0c8e92a\" (UID: \"de8dd8df-568e-4e5c-a2e7-5dcbe0c8e92a\") " Jan 30 22:04:49 crc kubenswrapper[4869]: I0130 22:04:49.804009 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de8dd8df-568e-4e5c-a2e7-5dcbe0c8e92a-kube-api-access-sxtk2" (OuterVolumeSpecName: "kube-api-access-sxtk2") pod "de8dd8df-568e-4e5c-a2e7-5dcbe0c8e92a" (UID: "de8dd8df-568e-4e5c-a2e7-5dcbe0c8e92a"). InnerVolumeSpecName "kube-api-access-sxtk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:04:49 crc kubenswrapper[4869]: I0130 22:04:49.890370 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29afe53d-3124-4365-ab6a-abe5b7630a4b" path="/var/lib/kubelet/pods/29afe53d-3124-4365-ab6a-abe5b7630a4b/volumes" Jan 30 22:04:49 crc kubenswrapper[4869]: I0130 22:04:49.890935 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ba1f617-3e3a-4d7c-9374-ed5a271550d3" path="/var/lib/kubelet/pods/3ba1f617-3e3a-4d7c-9374-ed5a271550d3/volumes" Jan 30 22:04:49 crc kubenswrapper[4869]: I0130 22:04:49.891663 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6ec6fcd-3470-4626-a408-92e100dabfdd" path="/var/lib/kubelet/pods/a6ec6fcd-3470-4626-a408-92e100dabfdd/volumes" Jan 30 22:04:49 crc kubenswrapper[4869]: I0130 22:04:49.892597 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bafac8d6-5853-4f41-a2e0-b24fbd2a533d" path="/var/lib/kubelet/pods/bafac8d6-5853-4f41-a2e0-b24fbd2a533d/volumes" Jan 30 22:04:49 crc kubenswrapper[4869]: I0130 22:04:49.901743 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxtk2\" (UniqueName: \"kubernetes.io/projected/de8dd8df-568e-4e5c-a2e7-5dcbe0c8e92a-kube-api-access-sxtk2\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:50 crc kubenswrapper[4869]: I0130 22:04:50.336525 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-26p8t" event={"ID":"20d7cc6e-b969-4768-978d-534adef89f4f","Type":"ContainerDied","Data":"2c4749c6b00b7c28de5f27cccaf697d8fe11759570fc014ceba72eca750527ce"} Jan 30 22:04:50 crc kubenswrapper[4869]: I0130 22:04:50.336575 4869 scope.go:117] "RemoveContainer" containerID="f58320dfca0855f69000bfd485396cf9d75828010486072e1ed4b7c66ac53c9c" Jan 30 22:04:50 crc kubenswrapper[4869]: I0130 22:04:50.336655 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-26p8t" Jan 30 22:04:50 crc kubenswrapper[4869]: I0130 22:04:50.339466 4869 generic.go:334] "Generic (PLEG): container finished" podID="de8dd8df-568e-4e5c-a2e7-5dcbe0c8e92a" containerID="937d11294624b3c3ef6e8838dcb5b80e6352781b7ec1a83ca26d3031998997a6" exitCode=0 Jan 30 22:04:50 crc kubenswrapper[4869]: I0130 22:04:50.339802 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-rzlf9" event={"ID":"de8dd8df-568e-4e5c-a2e7-5dcbe0c8e92a","Type":"ContainerDied","Data":"937d11294624b3c3ef6e8838dcb5b80e6352781b7ec1a83ca26d3031998997a6"} Jan 30 22:04:50 crc kubenswrapper[4869]: I0130 22:04:50.339821 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-rzlf9" event={"ID":"de8dd8df-568e-4e5c-a2e7-5dcbe0c8e92a","Type":"ContainerDied","Data":"6a4808ae097d9d95d0a01fc236639b2f79f44eadbd4cb196a626551c8cd7b4ef"} Jan 30 22:04:50 crc kubenswrapper[4869]: I0130 22:04:50.339858 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-rzlf9" Jan 30 22:04:50 crc kubenswrapper[4869]: I0130 22:04:50.353745 4869 scope.go:117] "RemoveContainer" containerID="937d11294624b3c3ef6e8838dcb5b80e6352781b7ec1a83ca26d3031998997a6" Jan 30 22:04:50 crc kubenswrapper[4869]: I0130 22:04:50.357430 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-26p8t"] Jan 30 22:04:50 crc kubenswrapper[4869]: I0130 22:04:50.361892 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-26p8t"] Jan 30 22:04:50 crc kubenswrapper[4869]: I0130 22:04:50.369835 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-rzlf9"] Jan 30 22:04:50 crc kubenswrapper[4869]: I0130 22:04:50.375841 4869 scope.go:117] "RemoveContainer" containerID="937d11294624b3c3ef6e8838dcb5b80e6352781b7ec1a83ca26d3031998997a6" Jan 30 22:04:50 crc kubenswrapper[4869]: E0130 22:04:50.376358 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"937d11294624b3c3ef6e8838dcb5b80e6352781b7ec1a83ca26d3031998997a6\": container with ID starting with 937d11294624b3c3ef6e8838dcb5b80e6352781b7ec1a83ca26d3031998997a6 not found: ID does not exist" containerID="937d11294624b3c3ef6e8838dcb5b80e6352781b7ec1a83ca26d3031998997a6" Jan 30 22:04:50 crc kubenswrapper[4869]: I0130 22:04:50.376391 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"937d11294624b3c3ef6e8838dcb5b80e6352781b7ec1a83ca26d3031998997a6"} err="failed to get container status \"937d11294624b3c3ef6e8838dcb5b80e6352781b7ec1a83ca26d3031998997a6\": rpc error: code = NotFound desc = could not find container \"937d11294624b3c3ef6e8838dcb5b80e6352781b7ec1a83ca26d3031998997a6\": container with ID starting with 937d11294624b3c3ef6e8838dcb5b80e6352781b7ec1a83ca26d3031998997a6 not found: ID does not exist" Jan 30 22:04:50 crc kubenswrapper[4869]: I0130 22:04:50.377225 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-rzlf9"] Jan 30 22:04:51 crc kubenswrapper[4869]: I0130 22:04:51.353098 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-controller-manager-748fc89b74-xknf6"] Jan 30 22:04:51 crc kubenswrapper[4869]: I0130 22:04:51.353329 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/infra-operator-controller-manager-748fc89b74-xknf6" podUID="b5313756-52d9-4e0c-b328-4fb9609b70a9" containerName="manager" containerID="cri-o://eb3228e80004862e9b62c32d689f2f110876e6f1516e4476deb1ca11b0a5f611" gracePeriod=10 Jan 30 22:04:51 crc kubenswrapper[4869]: I0130 22:04:51.582542 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-index-jsvkg"] Jan 30 22:04:51 crc kubenswrapper[4869]: I0130 22:04:51.583043 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/infra-operator-index-jsvkg" podUID="f8e80c91-24a2-43d0-8add-685b3fb41e69" containerName="registry-server" containerID="cri-o://abf4d6296394e843caa120a853972aeef205a4e9957fd3b14965c4dd732e24b9" gracePeriod=30 Jan 30 22:04:51 crc kubenswrapper[4869]: I0130 22:04:51.630939 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576ds5gt"] Jan 30 22:04:51 crc kubenswrapper[4869]: I0130 22:04:51.662055 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576ds5gt"] Jan 30 22:04:51 crc kubenswrapper[4869]: I0130 22:04:51.847098 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-748fc89b74-xknf6" Jan 30 22:04:51 crc kubenswrapper[4869]: I0130 22:04:51.884193 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20d7cc6e-b969-4768-978d-534adef89f4f" path="/var/lib/kubelet/pods/20d7cc6e-b969-4768-978d-534adef89f4f/volumes" Jan 30 22:04:51 crc kubenswrapper[4869]: I0130 22:04:51.884632 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de8dd8df-568e-4e5c-a2e7-5dcbe0c8e92a" path="/var/lib/kubelet/pods/de8dd8df-568e-4e5c-a2e7-5dcbe0c8e92a/volumes" Jan 30 22:04:51 crc kubenswrapper[4869]: I0130 22:04:51.885107 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffc49835-5a2c-434a-984b-10abf3fe7a55" path="/var/lib/kubelet/pods/ffc49835-5a2c-434a-984b-10abf3fe7a55/volumes" Jan 30 22:04:52 crc kubenswrapper[4869]: I0130 22:04:52.030677 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-jsvkg" Jan 30 22:04:52 crc kubenswrapper[4869]: I0130 22:04:52.040354 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b5313756-52d9-4e0c-b328-4fb9609b70a9-apiservice-cert\") pod \"b5313756-52d9-4e0c-b328-4fb9609b70a9\" (UID: \"b5313756-52d9-4e0c-b328-4fb9609b70a9\") " Jan 30 22:04:52 crc kubenswrapper[4869]: I0130 22:04:52.040425 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbtqp\" (UniqueName: \"kubernetes.io/projected/b5313756-52d9-4e0c-b328-4fb9609b70a9-kube-api-access-fbtqp\") pod \"b5313756-52d9-4e0c-b328-4fb9609b70a9\" (UID: \"b5313756-52d9-4e0c-b328-4fb9609b70a9\") " Jan 30 22:04:52 crc kubenswrapper[4869]: I0130 22:04:52.040521 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b5313756-52d9-4e0c-b328-4fb9609b70a9-webhook-cert\") pod \"b5313756-52d9-4e0c-b328-4fb9609b70a9\" (UID: \"b5313756-52d9-4e0c-b328-4fb9609b70a9\") " Jan 30 22:04:52 crc kubenswrapper[4869]: I0130 22:04:52.046147 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5313756-52d9-4e0c-b328-4fb9609b70a9-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "b5313756-52d9-4e0c-b328-4fb9609b70a9" (UID: "b5313756-52d9-4e0c-b328-4fb9609b70a9"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:52 crc kubenswrapper[4869]: I0130 22:04:52.046879 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5313756-52d9-4e0c-b328-4fb9609b70a9-kube-api-access-fbtqp" (OuterVolumeSpecName: "kube-api-access-fbtqp") pod "b5313756-52d9-4e0c-b328-4fb9609b70a9" (UID: "b5313756-52d9-4e0c-b328-4fb9609b70a9"). InnerVolumeSpecName "kube-api-access-fbtqp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:04:52 crc kubenswrapper[4869]: I0130 22:04:52.047295 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5313756-52d9-4e0c-b328-4fb9609b70a9-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "b5313756-52d9-4e0c-b328-4fb9609b70a9" (UID: "b5313756-52d9-4e0c-b328-4fb9609b70a9"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:52 crc kubenswrapper[4869]: I0130 22:04:52.141636 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7w6n\" (UniqueName: \"kubernetes.io/projected/f8e80c91-24a2-43d0-8add-685b3fb41e69-kube-api-access-r7w6n\") pod \"f8e80c91-24a2-43d0-8add-685b3fb41e69\" (UID: \"f8e80c91-24a2-43d0-8add-685b3fb41e69\") " Jan 30 22:04:52 crc kubenswrapper[4869]: I0130 22:04:52.141971 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbtqp\" (UniqueName: \"kubernetes.io/projected/b5313756-52d9-4e0c-b328-4fb9609b70a9-kube-api-access-fbtqp\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:52 crc kubenswrapper[4869]: I0130 22:04:52.141985 4869 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b5313756-52d9-4e0c-b328-4fb9609b70a9-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:52 crc kubenswrapper[4869]: I0130 22:04:52.141994 4869 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b5313756-52d9-4e0c-b328-4fb9609b70a9-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:52 crc kubenswrapper[4869]: I0130 22:04:52.145297 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8e80c91-24a2-43d0-8add-685b3fb41e69-kube-api-access-r7w6n" (OuterVolumeSpecName: "kube-api-access-r7w6n") pod "f8e80c91-24a2-43d0-8add-685b3fb41e69" (UID: "f8e80c91-24a2-43d0-8add-685b3fb41e69"). InnerVolumeSpecName "kube-api-access-r7w6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:04:52 crc kubenswrapper[4869]: I0130 22:04:52.243871 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7w6n\" (UniqueName: \"kubernetes.io/projected/f8e80c91-24a2-43d0-8add-685b3fb41e69-kube-api-access-r7w6n\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:52 crc kubenswrapper[4869]: I0130 22:04:52.357754 4869 generic.go:334] "Generic (PLEG): container finished" podID="f8e80c91-24a2-43d0-8add-685b3fb41e69" containerID="abf4d6296394e843caa120a853972aeef205a4e9957fd3b14965c4dd732e24b9" exitCode=0 Jan 30 22:04:52 crc kubenswrapper[4869]: I0130 22:04:52.357833 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-jsvkg" event={"ID":"f8e80c91-24a2-43d0-8add-685b3fb41e69","Type":"ContainerDied","Data":"abf4d6296394e843caa120a853972aeef205a4e9957fd3b14965c4dd732e24b9"} Jan 30 22:04:52 crc kubenswrapper[4869]: I0130 22:04:52.357853 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-jsvkg" Jan 30 22:04:52 crc kubenswrapper[4869]: I0130 22:04:52.357873 4869 scope.go:117] "RemoveContainer" containerID="abf4d6296394e843caa120a853972aeef205a4e9957fd3b14965c4dd732e24b9" Jan 30 22:04:52 crc kubenswrapper[4869]: I0130 22:04:52.357862 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-jsvkg" event={"ID":"f8e80c91-24a2-43d0-8add-685b3fb41e69","Type":"ContainerDied","Data":"b42c4582217985f1a6edef7323be0879a99af3eb77add3cb468b295bf9ab3229"} Jan 30 22:04:52 crc kubenswrapper[4869]: I0130 22:04:52.360128 4869 generic.go:334] "Generic (PLEG): container finished" podID="b5313756-52d9-4e0c-b328-4fb9609b70a9" containerID="eb3228e80004862e9b62c32d689f2f110876e6f1516e4476deb1ca11b0a5f611" exitCode=0 Jan 30 22:04:52 crc kubenswrapper[4869]: I0130 22:04:52.360180 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-748fc89b74-xknf6" event={"ID":"b5313756-52d9-4e0c-b328-4fb9609b70a9","Type":"ContainerDied","Data":"eb3228e80004862e9b62c32d689f2f110876e6f1516e4476deb1ca11b0a5f611"} Jan 30 22:04:52 crc kubenswrapper[4869]: I0130 22:04:52.360212 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-748fc89b74-xknf6" event={"ID":"b5313756-52d9-4e0c-b328-4fb9609b70a9","Type":"ContainerDied","Data":"200bbfefef6ab015e76dfdd0b5fb0c70d9dc7545be256cdef0dd62daf456a519"} Jan 30 22:04:52 crc kubenswrapper[4869]: I0130 22:04:52.360268 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-748fc89b74-xknf6" Jan 30 22:04:52 crc kubenswrapper[4869]: I0130 22:04:52.376517 4869 scope.go:117] "RemoveContainer" containerID="abf4d6296394e843caa120a853972aeef205a4e9957fd3b14965c4dd732e24b9" Jan 30 22:04:52 crc kubenswrapper[4869]: E0130 22:04:52.377248 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abf4d6296394e843caa120a853972aeef205a4e9957fd3b14965c4dd732e24b9\": container with ID starting with abf4d6296394e843caa120a853972aeef205a4e9957fd3b14965c4dd732e24b9 not found: ID does not exist" containerID="abf4d6296394e843caa120a853972aeef205a4e9957fd3b14965c4dd732e24b9" Jan 30 22:04:52 crc kubenswrapper[4869]: I0130 22:04:52.377344 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abf4d6296394e843caa120a853972aeef205a4e9957fd3b14965c4dd732e24b9"} err="failed to get container status \"abf4d6296394e843caa120a853972aeef205a4e9957fd3b14965c4dd732e24b9\": rpc error: code = NotFound desc = could not find container \"abf4d6296394e843caa120a853972aeef205a4e9957fd3b14965c4dd732e24b9\": container with ID starting with abf4d6296394e843caa120a853972aeef205a4e9957fd3b14965c4dd732e24b9 not found: ID does not exist" Jan 30 22:04:52 crc kubenswrapper[4869]: I0130 22:04:52.377422 4869 scope.go:117] "RemoveContainer" containerID="eb3228e80004862e9b62c32d689f2f110876e6f1516e4476deb1ca11b0a5f611" Jan 30 22:04:52 crc kubenswrapper[4869]: I0130 22:04:52.397266 4869 scope.go:117] "RemoveContainer" containerID="eb3228e80004862e9b62c32d689f2f110876e6f1516e4476deb1ca11b0a5f611" Jan 30 22:04:52 crc kubenswrapper[4869]: E0130 22:04:52.398151 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"eb3228e80004862e9b62c32d689f2f110876e6f1516e4476deb1ca11b0a5f611\": container with ID starting with eb3228e80004862e9b62c32d689f2f110876e6f1516e4476deb1ca11b0a5f611 not found: ID does not exist" containerID="eb3228e80004862e9b62c32d689f2f110876e6f1516e4476deb1ca11b0a5f611" Jan 30 22:04:52 crc kubenswrapper[4869]: I0130 22:04:52.398199 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb3228e80004862e9b62c32d689f2f110876e6f1516e4476deb1ca11b0a5f611"} err="failed to get container status \"eb3228e80004862e9b62c32d689f2f110876e6f1516e4476deb1ca11b0a5f611\": rpc error: code = NotFound desc = could not find container \"eb3228e80004862e9b62c32d689f2f110876e6f1516e4476deb1ca11b0a5f611\": container with ID starting with eb3228e80004862e9b62c32d689f2f110876e6f1516e4476deb1ca11b0a5f611 not found: ID does not exist" Jan 30 22:04:52 crc kubenswrapper[4869]: I0130 22:04:52.421362 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-controller-manager-748fc89b74-xknf6"] Jan 30 22:04:52 crc kubenswrapper[4869]: I0130 22:04:52.422868 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/infra-operator-controller-manager-748fc89b74-xknf6"] Jan 30 22:04:52 crc kubenswrapper[4869]: I0130 22:04:52.434601 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-index-jsvkg"] Jan 30 22:04:52 crc kubenswrapper[4869]: I0130 22:04:52.438407 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/infra-operator-index-jsvkg"] Jan 30 22:04:53 crc kubenswrapper[4869]: I0130 22:04:53.261332 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-86cdc6c597-g94qc"] Jan 30 22:04:53 crc kubenswrapper[4869]: I0130 22:04:53.261575 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/mariadb-operator-controller-manager-86cdc6c597-g94qc" podUID="54633931-4810-498b-9b01-c800f623a2d4" containerName="manager" containerID="cri-o://376da18648cbc1ea030207551d3774a8f23dadf5eeb2cb4efe225172ab00db96" gracePeriod=10 Jan 30 22:04:53 crc kubenswrapper[4869]: I0130 22:04:53.549427 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-blgg8"] Jan 30 22:04:53 crc kubenswrapper[4869]: I0130 22:04:53.549671 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/mariadb-operator-index-blgg8" podUID="af9e7db8-505e-442f-86a8-791e8196ecc0" containerName="registry-server" containerID="cri-o://a270ebc41d25ac141ed114b48c7c0137be9a32d0142c129cba75d19d9c497131" gracePeriod=30 Jan 30 22:04:53 crc kubenswrapper[4869]: I0130 22:04:53.586338 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40wqhbd"] Jan 30 22:04:53 crc kubenswrapper[4869]: I0130 22:04:53.588655 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40wqhbd"] Jan 30 22:04:53 crc kubenswrapper[4869]: I0130 22:04:53.823721 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-86cdc6c597-g94qc" Jan 30 22:04:53 crc kubenswrapper[4869]: I0130 22:04:53.887605 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5313756-52d9-4e0c-b328-4fb9609b70a9" path="/var/lib/kubelet/pods/b5313756-52d9-4e0c-b328-4fb9609b70a9/volumes" Jan 30 22:04:53 crc kubenswrapper[4869]: I0130 22:04:53.888106 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2" path="/var/lib/kubelet/pods/ba31501d-5f3e-4b55-ad2a-2f3275e5e0e2/volumes" Jan 30 22:04:53 crc kubenswrapper[4869]: I0130 22:04:53.888731 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8e80c91-24a2-43d0-8add-685b3fb41e69" path="/var/lib/kubelet/pods/f8e80c91-24a2-43d0-8add-685b3fb41e69/volumes" Jan 30 22:04:53 crc kubenswrapper[4869]: I0130 22:04:53.900196 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-blgg8" Jan 30 22:04:53 crc kubenswrapper[4869]: I0130 22:04:53.963148 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/54633931-4810-498b-9b01-c800f623a2d4-webhook-cert\") pod \"54633931-4810-498b-9b01-c800f623a2d4\" (UID: \"54633931-4810-498b-9b01-c800f623a2d4\") " Jan 30 22:04:53 crc kubenswrapper[4869]: I0130 22:04:53.963204 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/54633931-4810-498b-9b01-c800f623a2d4-apiservice-cert\") pod \"54633931-4810-498b-9b01-c800f623a2d4\" (UID: \"54633931-4810-498b-9b01-c800f623a2d4\") " Jan 30 22:04:53 crc kubenswrapper[4869]: I0130 22:04:53.963281 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t76pr\" (UniqueName: \"kubernetes.io/projected/54633931-4810-498b-9b01-c800f623a2d4-kube-api-access-t76pr\") pod \"54633931-4810-498b-9b01-c800f623a2d4\" (UID: \"54633931-4810-498b-9b01-c800f623a2d4\") " Jan 30 22:04:53 crc kubenswrapper[4869]: I0130 22:04:53.967664 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54633931-4810-498b-9b01-c800f623a2d4-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "54633931-4810-498b-9b01-c800f623a2d4" (UID: "54633931-4810-498b-9b01-c800f623a2d4"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:53 crc kubenswrapper[4869]: I0130 22:04:53.967897 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54633931-4810-498b-9b01-c800f623a2d4-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "54633931-4810-498b-9b01-c800f623a2d4" (UID: "54633931-4810-498b-9b01-c800f623a2d4"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:53 crc kubenswrapper[4869]: I0130 22:04:53.974188 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54633931-4810-498b-9b01-c800f623a2d4-kube-api-access-t76pr" (OuterVolumeSpecName: "kube-api-access-t76pr") pod "54633931-4810-498b-9b01-c800f623a2d4" (UID: "54633931-4810-498b-9b01-c800f623a2d4"). InnerVolumeSpecName "kube-api-access-t76pr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:04:54 crc kubenswrapper[4869]: I0130 22:04:54.064140 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2c9j\" (UniqueName: \"kubernetes.io/projected/af9e7db8-505e-442f-86a8-791e8196ecc0-kube-api-access-x2c9j\") pod \"af9e7db8-505e-442f-86a8-791e8196ecc0\" (UID: \"af9e7db8-505e-442f-86a8-791e8196ecc0\") " Jan 30 22:04:54 crc kubenswrapper[4869]: I0130 22:04:54.064648 4869 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/54633931-4810-498b-9b01-c800f623a2d4-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:54 crc kubenswrapper[4869]: I0130 22:04:54.064694 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t76pr\" (UniqueName: \"kubernetes.io/projected/54633931-4810-498b-9b01-c800f623a2d4-kube-api-access-t76pr\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:54 crc kubenswrapper[4869]: I0130 22:04:54.064711 4869 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/54633931-4810-498b-9b01-c800f623a2d4-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:54 crc kubenswrapper[4869]: I0130 22:04:54.067074 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af9e7db8-505e-442f-86a8-791e8196ecc0-kube-api-access-x2c9j" (OuterVolumeSpecName: "kube-api-access-x2c9j") pod "af9e7db8-505e-442f-86a8-791e8196ecc0" (UID: "af9e7db8-505e-442f-86a8-791e8196ecc0"). InnerVolumeSpecName "kube-api-access-x2c9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:04:54 crc kubenswrapper[4869]: I0130 22:04:54.165751 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2c9j\" (UniqueName: \"kubernetes.io/projected/af9e7db8-505e-442f-86a8-791e8196ecc0-kube-api-access-x2c9j\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:54 crc kubenswrapper[4869]: I0130 22:04:54.393327 4869 generic.go:334] "Generic (PLEG): container finished" podID="54633931-4810-498b-9b01-c800f623a2d4" containerID="376da18648cbc1ea030207551d3774a8f23dadf5eeb2cb4efe225172ab00db96" exitCode=0 Jan 30 22:04:54 crc kubenswrapper[4869]: I0130 22:04:54.393363 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-86cdc6c597-g94qc" Jan 30 22:04:54 crc kubenswrapper[4869]: I0130 22:04:54.393431 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-86cdc6c597-g94qc" event={"ID":"54633931-4810-498b-9b01-c800f623a2d4","Type":"ContainerDied","Data":"376da18648cbc1ea030207551d3774a8f23dadf5eeb2cb4efe225172ab00db96"} Jan 30 22:04:54 crc kubenswrapper[4869]: I0130 22:04:54.393478 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-86cdc6c597-g94qc" event={"ID":"54633931-4810-498b-9b01-c800f623a2d4","Type":"ContainerDied","Data":"ea72e51bb5a8e3f58ebd8738f80bf2f4e72486c95e23f41ec2325d9c402ec77a"} Jan 30 22:04:54 crc kubenswrapper[4869]: I0130 22:04:54.393498 4869 scope.go:117] "RemoveContainer" containerID="376da18648cbc1ea030207551d3774a8f23dadf5eeb2cb4efe225172ab00db96" Jan 30 22:04:54 crc kubenswrapper[4869]: I0130 22:04:54.395035 4869 generic.go:334] "Generic (PLEG): container finished" podID="af9e7db8-505e-442f-86a8-791e8196ecc0" containerID="a270ebc41d25ac141ed114b48c7c0137be9a32d0142c129cba75d19d9c497131" exitCode=0 Jan 30 22:04:54 crc kubenswrapper[4869]: I0130 22:04:54.395068 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-blgg8" event={"ID":"af9e7db8-505e-442f-86a8-791e8196ecc0","Type":"ContainerDied","Data":"a270ebc41d25ac141ed114b48c7c0137be9a32d0142c129cba75d19d9c497131"} Jan 30 22:04:54 crc kubenswrapper[4869]: I0130 22:04:54.395110 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-blgg8" event={"ID":"af9e7db8-505e-442f-86a8-791e8196ecc0","Type":"ContainerDied","Data":"2b8af07a50615a54a398ae097dc27e424f3fbbf4bfd6dd9dd561cab500f74b8d"} Jan 30 22:04:54 crc kubenswrapper[4869]: I0130 22:04:54.395108 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-blgg8" Jan 30 22:04:54 crc kubenswrapper[4869]: I0130 22:04:54.413190 4869 scope.go:117] "RemoveContainer" containerID="376da18648cbc1ea030207551d3774a8f23dadf5eeb2cb4efe225172ab00db96" Jan 30 22:04:54 crc kubenswrapper[4869]: E0130 22:04:54.413551 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"376da18648cbc1ea030207551d3774a8f23dadf5eeb2cb4efe225172ab00db96\": container with ID starting with 376da18648cbc1ea030207551d3774a8f23dadf5eeb2cb4efe225172ab00db96 not found: ID does not exist" containerID="376da18648cbc1ea030207551d3774a8f23dadf5eeb2cb4efe225172ab00db96" Jan 30 22:04:54 crc kubenswrapper[4869]: I0130 22:04:54.413580 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"376da18648cbc1ea030207551d3774a8f23dadf5eeb2cb4efe225172ab00db96"} err="failed to get container status \"376da18648cbc1ea030207551d3774a8f23dadf5eeb2cb4efe225172ab00db96\": rpc error: code = NotFound desc = could not find container \"376da18648cbc1ea030207551d3774a8f23dadf5eeb2cb4efe225172ab00db96\": container with ID starting with 376da18648cbc1ea030207551d3774a8f23dadf5eeb2cb4efe225172ab00db96 not found: ID does not exist" Jan 30 22:04:54 crc kubenswrapper[4869]: I0130 22:04:54.413598 4869 scope.go:117] "RemoveContainer" containerID="a270ebc41d25ac141ed114b48c7c0137be9a32d0142c129cba75d19d9c497131" Jan 30 22:04:54 crc kubenswrapper[4869]: I0130 22:04:54.425462 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-86cdc6c597-g94qc"] Jan 30 22:04:54 crc kubenswrapper[4869]: I0130 22:04:54.431311 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-86cdc6c597-g94qc"] Jan 30 22:04:54 crc kubenswrapper[4869]: I0130 22:04:54.434594 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-blgg8"] Jan 30 22:04:54 crc kubenswrapper[4869]: I0130 22:04:54.435376 4869 scope.go:117] "RemoveContainer" containerID="a270ebc41d25ac141ed114b48c7c0137be9a32d0142c129cba75d19d9c497131" Jan 30 22:04:54 crc kubenswrapper[4869]: E0130 22:04:54.435812 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a270ebc41d25ac141ed114b48c7c0137be9a32d0142c129cba75d19d9c497131\": container with ID starting with a270ebc41d25ac141ed114b48c7c0137be9a32d0142c129cba75d19d9c497131 not found: ID does not exist" containerID="a270ebc41d25ac141ed114b48c7c0137be9a32d0142c129cba75d19d9c497131" Jan 30 22:04:54 crc kubenswrapper[4869]: I0130 22:04:54.435839 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a270ebc41d25ac141ed114b48c7c0137be9a32d0142c129cba75d19d9c497131"} err="failed to get container status \"a270ebc41d25ac141ed114b48c7c0137be9a32d0142c129cba75d19d9c497131\": rpc error: code = NotFound desc = could not find container \"a270ebc41d25ac141ed114b48c7c0137be9a32d0142c129cba75d19d9c497131\": container with ID starting with a270ebc41d25ac141ed114b48c7c0137be9a32d0142c129cba75d19d9c497131 not found: ID does not exist" Jan 30 22:04:54 crc kubenswrapper[4869]: I0130 22:04:54.437663 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/mariadb-operator-index-blgg8"] Jan 30 22:04:55 crc kubenswrapper[4869]: I0130 22:04:55.885504 4869 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54633931-4810-498b-9b01-c800f623a2d4" path="/var/lib/kubelet/pods/54633931-4810-498b-9b01-c800f623a2d4/volumes" Jan 30 22:04:55 crc kubenswrapper[4869]: I0130 22:04:55.886415 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af9e7db8-505e-442f-86a8-791e8196ecc0" path="/var/lib/kubelet/pods/af9e7db8-505e-442f-86a8-791e8196ecc0/volumes" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.454181 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-g8lrq/must-gather-2dbxg"] Jan 30 22:05:07 crc kubenswrapper[4869]: E0130 22:05:07.454857 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f14ed92-142f-4e45-8be8-d60ab70d051a" containerName="mysql-bootstrap" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.454868 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f14ed92-142f-4e45-8be8-d60ab70d051a" containerName="mysql-bootstrap" Jan 30 22:05:07 crc kubenswrapper[4869]: E0130 22:05:07.454881 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29afe53d-3124-4365-ab6a-abe5b7630a4b" containerName="registry-server" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.454887 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="29afe53d-3124-4365-ab6a-abe5b7630a4b" containerName="registry-server" Jan 30 22:05:07 crc kubenswrapper[4869]: E0130 22:05:07.454921 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20d7cc6e-b969-4768-978d-534adef89f4f" containerName="operator" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.454933 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="20d7cc6e-b969-4768-978d-534adef89f4f" containerName="operator" Jan 30 22:05:07 crc kubenswrapper[4869]: E0130 22:05:07.454943 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de8dd8df-568e-4e5c-a2e7-5dcbe0c8e92a" containerName="registry-server" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.454950 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="de8dd8df-568e-4e5c-a2e7-5dcbe0c8e92a" containerName="registry-server" Jan 30 22:05:07 crc kubenswrapper[4869]: E0130 22:05:07.454961 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dd1249b-e125-4034-a274-8cf26b3e9b3a" containerName="galera" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.454966 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dd1249b-e125-4034-a274-8cf26b3e9b3a" containerName="galera" Jan 30 22:05:07 crc kubenswrapper[4869]: E0130 22:05:07.454976 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c7d59e0-6997-436f-b17d-67e8d1c0f319" containerName="registry-server" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.454982 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c7d59e0-6997-436f-b17d-67e8d1c0f319" containerName="registry-server" Jan 30 22:05:07 crc kubenswrapper[4869]: E0130 22:05:07.454989 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6ec6fcd-3470-4626-a408-92e100dabfdd" containerName="mariadb-account-delete" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.454994 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6ec6fcd-3470-4626-a408-92e100dabfdd" containerName="mariadb-account-delete" Jan 30 22:05:07 crc kubenswrapper[4869]: E0130 22:05:07.455001 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54633931-4810-498b-9b01-c800f623a2d4" 
containerName="manager" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.455006 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="54633931-4810-498b-9b01-c800f623a2d4" containerName="manager" Jan 30 22:05:07 crc kubenswrapper[4869]: E0130 22:05:07.455015 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8e80c91-24a2-43d0-8add-685b3fb41e69" containerName="registry-server" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.455020 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8e80c91-24a2-43d0-8add-685b3fb41e69" containerName="registry-server" Jan 30 22:05:07 crc kubenswrapper[4869]: E0130 22:05:07.455030 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dd1249b-e125-4034-a274-8cf26b3e9b3a" containerName="mysql-bootstrap" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.455035 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dd1249b-e125-4034-a274-8cf26b3e9b3a" containerName="mysql-bootstrap" Jan 30 22:05:07 crc kubenswrapper[4869]: E0130 22:05:07.455044 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ba1f617-3e3a-4d7c-9374-ed5a271550d3" containerName="galera" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.455049 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ba1f617-3e3a-4d7c-9374-ed5a271550d3" containerName="galera" Jan 30 22:05:07 crc kubenswrapper[4869]: E0130 22:05:07.455059 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce" containerName="keystone-api" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.455066 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce" containerName="keystone-api" Jan 30 22:05:07 crc kubenswrapper[4869]: E0130 22:05:07.455074 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f14ed92-142f-4e45-8be8-d60ab70d051a" containerName="galera" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.455080 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f14ed92-142f-4e45-8be8-d60ab70d051a" containerName="galera" Jan 30 22:05:07 crc kubenswrapper[4869]: E0130 22:05:07.455089 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="111e74f4-fd99-4f7d-8057-43794129795f" containerName="setup-container" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.455095 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="111e74f4-fd99-4f7d-8057-43794129795f" containerName="setup-container" Jan 30 22:05:07 crc kubenswrapper[4869]: E0130 22:05:07.455104 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13c3e212-4606-4375-96dd-b1fcf8a40d94" containerName="manager" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.455110 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="13c3e212-4606-4375-96dd-b1fcf8a40d94" containerName="manager" Jan 30 22:05:07 crc kubenswrapper[4869]: E0130 22:05:07.455117 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73299ef4-0062-46e4-a329-078037f1ef33" containerName="cinder-volume" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.455122 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="73299ef4-0062-46e4-a329-078037f1ef33" containerName="cinder-volume" Jan 30 22:05:07 crc kubenswrapper[4869]: E0130 22:05:07.455131 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5313756-52d9-4e0c-b328-4fb9609b70a9" containerName="manager" Jan 30 22:05:07 crc 
kubenswrapper[4869]: I0130 22:05:07.455139 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5313756-52d9-4e0c-b328-4fb9609b70a9" containerName="manager" Jan 30 22:05:07 crc kubenswrapper[4869]: E0130 22:05:07.455147 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af9e7db8-505e-442f-86a8-791e8196ecc0" containerName="registry-server" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.455153 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="af9e7db8-505e-442f-86a8-791e8196ecc0" containerName="registry-server" Jan 30 22:05:07 crc kubenswrapper[4869]: E0130 22:05:07.455161 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ba1f617-3e3a-4d7c-9374-ed5a271550d3" containerName="mysql-bootstrap" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.455167 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ba1f617-3e3a-4d7c-9374-ed5a271550d3" containerName="mysql-bootstrap" Jan 30 22:05:07 crc kubenswrapper[4869]: E0130 22:05:07.455175 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e27b23c-3307-49b1-93be-8188ed81865f" containerName="memcached" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.455181 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e27b23c-3307-49b1-93be-8188ed81865f" containerName="memcached" Jan 30 22:05:07 crc kubenswrapper[4869]: E0130 22:05:07.455190 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca2d32a5-f5a1-4e59-908b-0d55de2c600f" containerName="manager" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.455196 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca2d32a5-f5a1-4e59-908b-0d55de2c600f" containerName="manager" Jan 30 22:05:07 crc kubenswrapper[4869]: E0130 22:05:07.455204 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="111e74f4-fd99-4f7d-8057-43794129795f" containerName="rabbitmq" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.455209 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="111e74f4-fd99-4f7d-8057-43794129795f" containerName="rabbitmq" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.455296 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="111e74f4-fd99-4f7d-8057-43794129795f" containerName="rabbitmq" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.455307 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="73299ef4-0062-46e4-a329-078037f1ef33" containerName="probe" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.455315 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="af9e7db8-505e-442f-86a8-791e8196ecc0" containerName="registry-server" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.455322 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c7d59e0-6997-436f-b17d-67e8d1c0f319" containerName="registry-server" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.455331 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="54633931-4810-498b-9b01-c800f623a2d4" containerName="manager" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.455338 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8e80c91-24a2-43d0-8add-685b3fb41e69" containerName="registry-server" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.455347 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e27b23c-3307-49b1-93be-8188ed81865f" containerName="memcached" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 
22:05:07.455355 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f14ed92-142f-4e45-8be8-d60ab70d051a" containerName="galera" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.455361 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6ec6fcd-3470-4626-a408-92e100dabfdd" containerName="mariadb-account-delete" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.455368 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5313756-52d9-4e0c-b328-4fb9609b70a9" containerName="manager" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.455375 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="44dcbfe1-c9ef-44ec-b14f-0c3d1afe14ce" containerName="keystone-api" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.455384 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6ec6fcd-3470-4626-a408-92e100dabfdd" containerName="mariadb-account-delete" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.455391 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="20d7cc6e-b969-4768-978d-534adef89f4f" containerName="operator" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.455397 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="29afe53d-3124-4365-ab6a-abe5b7630a4b" containerName="registry-server" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.455403 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="de8dd8df-568e-4e5c-a2e7-5dcbe0c8e92a" containerName="registry-server" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.455412 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dd1249b-e125-4034-a274-8cf26b3e9b3a" containerName="galera" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.455420 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="13c3e212-4606-4375-96dd-b1fcf8a40d94" containerName="manager" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.455426 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ba1f617-3e3a-4d7c-9374-ed5a271550d3" containerName="galera" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.455435 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca2d32a5-f5a1-4e59-908b-0d55de2c600f" containerName="manager" Jan 30 22:05:07 crc kubenswrapper[4869]: E0130 22:05:07.455548 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6ec6fcd-3470-4626-a408-92e100dabfdd" containerName="mariadb-account-delete" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.455558 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6ec6fcd-3470-4626-a408-92e100dabfdd" containerName="mariadb-account-delete" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.456041 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-g8lrq/must-gather-2dbxg" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.461301 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-g8lrq"/"openshift-service-ca.crt" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.461889 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-g8lrq"/"default-dockercfg-jn97q" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.462673 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-g8lrq"/"kube-root-ca.crt" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.513839 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-g8lrq/must-gather-2dbxg"] Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.652005 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwkhk\" (UniqueName: \"kubernetes.io/projected/79d4e833-512b-406a-b10a-b24fa5385330-kube-api-access-wwkhk\") pod \"must-gather-2dbxg\" (UID: \"79d4e833-512b-406a-b10a-b24fa5385330\") " pod="openshift-must-gather-g8lrq/must-gather-2dbxg" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.652222 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/79d4e833-512b-406a-b10a-b24fa5385330-must-gather-output\") pod \"must-gather-2dbxg\" (UID: \"79d4e833-512b-406a-b10a-b24fa5385330\") " pod="openshift-must-gather-g8lrq/must-gather-2dbxg" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.753337 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/79d4e833-512b-406a-b10a-b24fa5385330-must-gather-output\") pod \"must-gather-2dbxg\" (UID: \"79d4e833-512b-406a-b10a-b24fa5385330\") " pod="openshift-must-gather-g8lrq/must-gather-2dbxg" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.753392 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwkhk\" (UniqueName: \"kubernetes.io/projected/79d4e833-512b-406a-b10a-b24fa5385330-kube-api-access-wwkhk\") pod \"must-gather-2dbxg\" (UID: \"79d4e833-512b-406a-b10a-b24fa5385330\") " pod="openshift-must-gather-g8lrq/must-gather-2dbxg" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.753813 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/79d4e833-512b-406a-b10a-b24fa5385330-must-gather-output\") pod \"must-gather-2dbxg\" (UID: \"79d4e833-512b-406a-b10a-b24fa5385330\") " pod="openshift-must-gather-g8lrq/must-gather-2dbxg" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.771813 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwkhk\" (UniqueName: \"kubernetes.io/projected/79d4e833-512b-406a-b10a-b24fa5385330-kube-api-access-wwkhk\") pod \"must-gather-2dbxg\" (UID: \"79d4e833-512b-406a-b10a-b24fa5385330\") " pod="openshift-must-gather-g8lrq/must-gather-2dbxg" Jan 30 22:05:07 crc kubenswrapper[4869]: I0130 22:05:07.772551 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-g8lrq/must-gather-2dbxg" Jan 30 22:05:08 crc kubenswrapper[4869]: I0130 22:05:08.205395 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-g8lrq/must-gather-2dbxg"] Jan 30 22:05:08 crc kubenswrapper[4869]: I0130 22:05:08.489521 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-g8lrq/must-gather-2dbxg" event={"ID":"79d4e833-512b-406a-b10a-b24fa5385330","Type":"ContainerStarted","Data":"9b99e73039664b24c5d37a9ccfe65530e96afccafbb73757f028bee868703906"} Jan 30 22:05:12 crc kubenswrapper[4869]: I0130 22:05:12.517096 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-g8lrq/must-gather-2dbxg" event={"ID":"79d4e833-512b-406a-b10a-b24fa5385330","Type":"ContainerStarted","Data":"aecb3bf74d890579bff2597c9eee782c6a20d8a4ac4c610809feebe323f5f65f"} Jan 30 22:05:12 crc kubenswrapper[4869]: I0130 22:05:12.517449 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-g8lrq/must-gather-2dbxg" event={"ID":"79d4e833-512b-406a-b10a-b24fa5385330","Type":"ContainerStarted","Data":"77f339fcd5e04ff9e6e64c873351252dd0731032a91e84e6947ec00d49fdfda5"} Jan 30 22:05:12 crc kubenswrapper[4869]: I0130 22:05:12.537328 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-g8lrq/must-gather-2dbxg" podStartSLOduration=2.127004178 podStartE2EDuration="5.537307349s" podCreationTimestamp="2026-01-30 22:05:07 +0000 UTC" firstStartedPulling="2026-01-30 22:05:08.221761026 +0000 UTC m=+1309.107519051" lastFinishedPulling="2026-01-30 22:05:11.632064197 +0000 UTC m=+1312.517822222" observedRunningTime="2026-01-30 22:05:12.533476178 +0000 UTC m=+1313.419234193" watchObservedRunningTime="2026-01-30 22:05:12.537307349 +0000 UTC m=+1313.423065374" Jan 30 22:05:21 crc kubenswrapper[4869]: I0130 22:05:21.664366 4869 scope.go:117] "RemoveContainer" containerID="477cfab7e743ae7cffaef8dcac7ca16dd2d26e05d6beadff5bfa60771c8d491c" Jan 30 22:05:21 crc kubenswrapper[4869]: I0130 22:05:21.691146 4869 scope.go:117] "RemoveContainer" containerID="075f374a74d264367e35ff42061cd9f043b3cfc3a9cad6906302a578ec1aab73" Jan 30 22:05:21 crc kubenswrapper[4869]: I0130 22:05:21.716961 4869 scope.go:117] "RemoveContainer" containerID="1f2f4f3e6f6c27924cc9dc8868d50d7db253eb44104a2c62c54442508e2f2298" Jan 30 22:05:21 crc kubenswrapper[4869]: I0130 22:05:21.738231 4869 scope.go:117] "RemoveContainer" containerID="f44ea8bbb40d5ad79caab596094bb2225e4866c149dbdb45596a79a3c1fe6beb" Jan 30 22:05:21 crc kubenswrapper[4869]: I0130 22:05:21.758499 4869 scope.go:117] "RemoveContainer" containerID="7c0f612d2cb5372b8d43d893cfb72ba389ec13907851a5b3cc6c66677d3dd5d0" Jan 30 22:05:21 crc kubenswrapper[4869]: I0130 22:05:21.784702 4869 scope.go:117] "RemoveContainer" containerID="eca79613af9ac95a5f7ff5c974f679d37331b818e017ddd0e1942adf923e5e74" Jan 30 22:05:21 crc kubenswrapper[4869]: I0130 22:05:21.812612 4869 scope.go:117] "RemoveContainer" containerID="a33228a91172479c36e2b830facb599cc42c522f07069eaa1a2c0e581ca09f15" Jan 30 22:05:21 crc kubenswrapper[4869]: I0130 22:05:21.828612 4869 scope.go:117] "RemoveContainer" containerID="1e2dd1dc956e93448f8b4b7e4c916967b3c8a5f37a4661a5081e243a03675a04" Jan 30 22:05:21 crc kubenswrapper[4869]: I0130 22:05:21.844488 4869 scope.go:117] "RemoveContainer" containerID="1bb7e6f5ee43527aa4c53bbb2b1151c568ca6ba344a24a047151be5ac55e9adf" Jan 30 22:05:21 crc kubenswrapper[4869]: I0130 22:05:21.863389 4869 scope.go:117] 
"RemoveContainer" containerID="e1a7767c19f8c6c9622447533b51eb908722efe806cc7054240f362d0b306825" Jan 30 22:05:57 crc kubenswrapper[4869]: I0130 22:05:57.885117 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-6bxlw_492fddc8-5b29-4b32-9b4c-9831317fae23/control-plane-machine-set-operator/0.log" Jan 30 22:05:58 crc kubenswrapper[4869]: I0130 22:05:58.034574 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-npnfg_c1f0b262-4d72-49a2-aa45-918fbc89a9f2/kube-rbac-proxy/0.log" Jan 30 22:05:58 crc kubenswrapper[4869]: I0130 22:05:58.079067 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-npnfg_c1f0b262-4d72-49a2-aa45-918fbc89a9f2/machine-api-operator/0.log" Jan 30 22:06:22 crc kubenswrapper[4869]: I0130 22:06:22.145736 4869 scope.go:117] "RemoveContainer" containerID="b5b68a766153b7c1740db5d50f8417c5a60993c1095b44ba6b537ec44280ef15" Jan 30 22:06:22 crc kubenswrapper[4869]: I0130 22:06:22.165171 4869 scope.go:117] "RemoveContainer" containerID="da7d1e5d796108713a867864f44ee22c88cebe365e7c7ac36f5d999dea8beac9" Jan 30 22:06:22 crc kubenswrapper[4869]: I0130 22:06:22.195940 4869 scope.go:117] "RemoveContainer" containerID="281427bf1e5c442d2cbb3ea76325ad3c318e506d18fc520094ce4db1d578ce55" Jan 30 22:06:26 crc kubenswrapper[4869]: I0130 22:06:26.377443 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-kjnqq_48ddd270-2e1a-4924-905c-89327f9fd1f4/kube-rbac-proxy/0.log" Jan 30 22:06:26 crc kubenswrapper[4869]: I0130 22:06:26.403333 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-kjnqq_48ddd270-2e1a-4924-905c-89327f9fd1f4/controller/0.log" Jan 30 22:06:26 crc kubenswrapper[4869]: I0130 22:06:26.523037 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrwjb_0d120a71-eb76-4e21-bd06-a646961dbebc/cp-frr-files/0.log" Jan 30 22:06:26 crc kubenswrapper[4869]: I0130 22:06:26.726159 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrwjb_0d120a71-eb76-4e21-bd06-a646961dbebc/cp-reloader/0.log" Jan 30 22:06:26 crc kubenswrapper[4869]: I0130 22:06:26.783430 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrwjb_0d120a71-eb76-4e21-bd06-a646961dbebc/cp-frr-files/0.log" Jan 30 22:06:26 crc kubenswrapper[4869]: I0130 22:06:26.792679 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrwjb_0d120a71-eb76-4e21-bd06-a646961dbebc/cp-metrics/0.log" Jan 30 22:06:26 crc kubenswrapper[4869]: I0130 22:06:26.826449 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrwjb_0d120a71-eb76-4e21-bd06-a646961dbebc/cp-reloader/0.log" Jan 30 22:06:26 crc kubenswrapper[4869]: I0130 22:06:26.983547 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrwjb_0d120a71-eb76-4e21-bd06-a646961dbebc/cp-frr-files/0.log" Jan 30 22:06:27 crc kubenswrapper[4869]: I0130 22:06:27.025715 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrwjb_0d120a71-eb76-4e21-bd06-a646961dbebc/cp-metrics/0.log" Jan 30 22:06:27 crc kubenswrapper[4869]: I0130 22:06:27.037801 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-qrwjb_0d120a71-eb76-4e21-bd06-a646961dbebc/cp-reloader/0.log" Jan 30 22:06:27 crc kubenswrapper[4869]: I0130 22:06:27.042926 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrwjb_0d120a71-eb76-4e21-bd06-a646961dbebc/cp-metrics/0.log" Jan 30 22:06:27 crc kubenswrapper[4869]: I0130 22:06:27.240071 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrwjb_0d120a71-eb76-4e21-bd06-a646961dbebc/cp-frr-files/0.log" Jan 30 22:06:27 crc kubenswrapper[4869]: I0130 22:06:27.240106 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrwjb_0d120a71-eb76-4e21-bd06-a646961dbebc/cp-reloader/0.log" Jan 30 22:06:27 crc kubenswrapper[4869]: I0130 22:06:27.245757 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrwjb_0d120a71-eb76-4e21-bd06-a646961dbebc/controller/0.log" Jan 30 22:06:27 crc kubenswrapper[4869]: I0130 22:06:27.291686 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrwjb_0d120a71-eb76-4e21-bd06-a646961dbebc/cp-metrics/0.log" Jan 30 22:06:27 crc kubenswrapper[4869]: I0130 22:06:27.435909 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrwjb_0d120a71-eb76-4e21-bd06-a646961dbebc/frr-metrics/0.log" Jan 30 22:06:27 crc kubenswrapper[4869]: I0130 22:06:27.492121 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrwjb_0d120a71-eb76-4e21-bd06-a646961dbebc/kube-rbac-proxy-frr/0.log" Jan 30 22:06:27 crc kubenswrapper[4869]: I0130 22:06:27.519417 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrwjb_0d120a71-eb76-4e21-bd06-a646961dbebc/kube-rbac-proxy/0.log" Jan 30 22:06:27 crc kubenswrapper[4869]: I0130 22:06:27.726741 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrwjb_0d120a71-eb76-4e21-bd06-a646961dbebc/frr/0.log" Jan 30 22:06:27 crc kubenswrapper[4869]: I0130 22:06:27.812584 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrwjb_0d120a71-eb76-4e21-bd06-a646961dbebc/reloader/0.log" Jan 30 22:06:27 crc kubenswrapper[4869]: I0130 22:06:27.838548 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-pdqbz_8df09342-d1b4-46c8-b073-756f9c26e15b/frr-k8s-webhook-server/0.log" Jan 30 22:06:27 crc kubenswrapper[4869]: I0130 22:06:27.975517 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-c784b4f9f-nctpd_74d47ca3-77d5-40b2-bf78-e6434e094b98/manager/0.log" Jan 30 22:06:28 crc kubenswrapper[4869]: I0130 22:06:28.036018 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7866d54458-pq5sd_b8abd45b-148e-4550-9f42-7ebb36bc52a3/webhook-server/0.log" Jan 30 22:06:28 crc kubenswrapper[4869]: I0130 22:06:28.200996 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2fslp_bb7f0287-c0d2-4a75-b392-5d143f6a9eb6/kube-rbac-proxy/0.log" Jan 30 22:06:28 crc kubenswrapper[4869]: I0130 22:06:28.310155 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2fslp_bb7f0287-c0d2-4a75-b392-5d143f6a9eb6/speaker/0.log" Jan 30 22:06:52 crc kubenswrapper[4869]: I0130 22:06:52.509526 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5q244_565c0cfa-b127-4da5-a3be-d660b5224997/util/0.log" Jan 30 22:06:52 crc kubenswrapper[4869]: I0130 22:06:52.766735 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5q244_565c0cfa-b127-4da5-a3be-d660b5224997/pull/0.log" Jan 30 22:06:52 crc kubenswrapper[4869]: I0130 22:06:52.769667 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5q244_565c0cfa-b127-4da5-a3be-d660b5224997/pull/0.log" Jan 30 22:06:52 crc kubenswrapper[4869]: I0130 22:06:52.787708 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5q244_565c0cfa-b127-4da5-a3be-d660b5224997/util/0.log" Jan 30 22:06:52 crc kubenswrapper[4869]: I0130 22:06:52.950656 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5q244_565c0cfa-b127-4da5-a3be-d660b5224997/util/0.log" Jan 30 22:06:52 crc kubenswrapper[4869]: I0130 22:06:52.978468 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5q244_565c0cfa-b127-4da5-a3be-d660b5224997/pull/0.log" Jan 30 22:06:52 crc kubenswrapper[4869]: I0130 22:06:52.995904 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5q244_565c0cfa-b127-4da5-a3be-d660b5224997/extract/0.log" Jan 30 22:06:53 crc kubenswrapper[4869]: I0130 22:06:53.130581 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-djhl2_9f702f9e-18d7-4559-8745-8b691886d766/extract-utilities/0.log" Jan 30 22:06:53 crc kubenswrapper[4869]: I0130 22:06:53.331638 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-djhl2_9f702f9e-18d7-4559-8745-8b691886d766/extract-utilities/0.log" Jan 30 22:06:53 crc kubenswrapper[4869]: I0130 22:06:53.342551 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-djhl2_9f702f9e-18d7-4559-8745-8b691886d766/extract-content/0.log" Jan 30 22:06:53 crc kubenswrapper[4869]: I0130 22:06:53.367940 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-djhl2_9f702f9e-18d7-4559-8745-8b691886d766/extract-content/0.log" Jan 30 22:06:53 crc kubenswrapper[4869]: I0130 22:06:53.557257 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-djhl2_9f702f9e-18d7-4559-8745-8b691886d766/extract-content/0.log" Jan 30 22:06:53 crc kubenswrapper[4869]: I0130 22:06:53.582064 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-djhl2_9f702f9e-18d7-4559-8745-8b691886d766/extract-utilities/0.log" Jan 30 22:06:53 crc kubenswrapper[4869]: I0130 22:06:53.778681 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wxw2l_2326e182-60d7-4af7-8845-8e688d90b0a1/extract-utilities/0.log" Jan 30 22:06:53 crc kubenswrapper[4869]: I0130 22:06:53.971621 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-djhl2_9f702f9e-18d7-4559-8745-8b691886d766/registry-server/0.log" Jan 30 22:06:54 crc kubenswrapper[4869]: I0130 22:06:54.095094 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wxw2l_2326e182-60d7-4af7-8845-8e688d90b0a1/extract-content/0.log" Jan 30 22:06:54 crc kubenswrapper[4869]: I0130 22:06:54.201390 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wxw2l_2326e182-60d7-4af7-8845-8e688d90b0a1/extract-utilities/0.log" Jan 30 22:06:54 crc kubenswrapper[4869]: I0130 22:06:54.202250 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wxw2l_2326e182-60d7-4af7-8845-8e688d90b0a1/extract-content/0.log" Jan 30 22:06:54 crc kubenswrapper[4869]: I0130 22:06:54.326469 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wxw2l_2326e182-60d7-4af7-8845-8e688d90b0a1/extract-utilities/0.log" Jan 30 22:06:54 crc kubenswrapper[4869]: I0130 22:06:54.327843 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wxw2l_2326e182-60d7-4af7-8845-8e688d90b0a1/extract-content/0.log" Jan 30 22:06:54 crc kubenswrapper[4869]: I0130 22:06:54.556167 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-55fhb_059ebbdc-d9b5-4a32-a167-30dfeae746ff/marketplace-operator/0.log" Jan 30 22:06:54 crc kubenswrapper[4869]: I0130 22:06:54.630660 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-h9l9x_dd5e41fc-1989-4b7d-b1ca-195a9ce9b0ae/extract-utilities/0.log" Jan 30 22:06:54 crc kubenswrapper[4869]: I0130 22:06:54.706811 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wxw2l_2326e182-60d7-4af7-8845-8e688d90b0a1/registry-server/0.log" Jan 30 22:06:54 crc kubenswrapper[4869]: I0130 22:06:54.844117 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-h9l9x_dd5e41fc-1989-4b7d-b1ca-195a9ce9b0ae/extract-utilities/0.log" Jan 30 22:06:54 crc kubenswrapper[4869]: I0130 22:06:54.874181 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-h9l9x_dd5e41fc-1989-4b7d-b1ca-195a9ce9b0ae/extract-content/0.log" Jan 30 22:06:54 crc kubenswrapper[4869]: I0130 22:06:54.887676 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-h9l9x_dd5e41fc-1989-4b7d-b1ca-195a9ce9b0ae/extract-content/0.log" Jan 30 22:06:55 crc kubenswrapper[4869]: I0130 22:06:55.041202 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-h9l9x_dd5e41fc-1989-4b7d-b1ca-195a9ce9b0ae/extract-utilities/0.log" Jan 30 22:06:55 crc kubenswrapper[4869]: I0130 22:06:55.105095 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-h9l9x_dd5e41fc-1989-4b7d-b1ca-195a9ce9b0ae/extract-content/0.log" Jan 30 22:06:55 crc kubenswrapper[4869]: I0130 22:06:55.138030 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-h9l9x_dd5e41fc-1989-4b7d-b1ca-195a9ce9b0ae/registry-server/0.log" Jan 30 22:06:55 crc kubenswrapper[4869]: I0130 22:06:55.248916 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-9n2gh_6abcba63-fc26-470d-b5bb-1a9e084cb65f/extract-utilities/0.log" Jan 30 22:06:55 crc kubenswrapper[4869]: I0130 22:06:55.431586 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9n2gh_6abcba63-fc26-470d-b5bb-1a9e084cb65f/extract-content/0.log" Jan 30 22:06:55 crc kubenswrapper[4869]: I0130 22:06:55.431809 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9n2gh_6abcba63-fc26-470d-b5bb-1a9e084cb65f/extract-utilities/0.log" Jan 30 22:06:55 crc kubenswrapper[4869]: I0130 22:06:55.460455 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9n2gh_6abcba63-fc26-470d-b5bb-1a9e084cb65f/extract-content/0.log" Jan 30 22:06:55 crc kubenswrapper[4869]: I0130 22:06:55.674101 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9n2gh_6abcba63-fc26-470d-b5bb-1a9e084cb65f/extract-utilities/0.log" Jan 30 22:06:55 crc kubenswrapper[4869]: I0130 22:06:55.707102 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9n2gh_6abcba63-fc26-470d-b5bb-1a9e084cb65f/extract-content/0.log" Jan 30 22:06:55 crc kubenswrapper[4869]: I0130 22:06:55.981820 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9n2gh_6abcba63-fc26-470d-b5bb-1a9e084cb65f/registry-server/0.log" Jan 30 22:07:01 crc kubenswrapper[4869]: I0130 22:07:01.990919 4869 patch_prober.go:28] interesting pod/machine-config-daemon-vzgdv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:07:01 crc kubenswrapper[4869]: I0130 22:07:01.992215 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:07:22 crc kubenswrapper[4869]: I0130 22:07:22.240849 4869 scope.go:117] "RemoveContainer" containerID="2ed74e5453c0e8a73deb9cc35bac1c64e50e0f8e215edd599f389d88297337af" Jan 30 22:07:22 crc kubenswrapper[4869]: I0130 22:07:22.274743 4869 scope.go:117] "RemoveContainer" containerID="a59c1b925edfa20a4b13364d7a4e1a73764308a5622726b53bfb9a145ab327d7" Jan 30 22:07:22 crc kubenswrapper[4869]: I0130 22:07:22.293289 4869 scope.go:117] "RemoveContainer" containerID="7889bc5a6aa75dbc449011f470d4c3f678a205cf9a7327b47403ac461525a781" Jan 30 22:07:22 crc kubenswrapper[4869]: I0130 22:07:22.357042 4869 scope.go:117] "RemoveContainer" containerID="a54c43c7b8fb82e6c2590299b83ff279e89f34ddacde303e3383dcb4ff22f03e" Jan 30 22:07:22 crc kubenswrapper[4869]: I0130 22:07:22.377660 4869 scope.go:117] "RemoveContainer" containerID="e151a62b18056d6f7ec07e05338c6bce5afb27a9a3710a00f7e4ad8332cdbca3" Jan 30 22:07:22 crc kubenswrapper[4869]: I0130 22:07:22.416094 4869 scope.go:117] "RemoveContainer" containerID="b8992e03162149a5d8af20dd1efd5d35b16b03ca5a907a456d80341f03472e03" Jan 30 22:07:22 crc kubenswrapper[4869]: I0130 22:07:22.475052 4869 scope.go:117] "RemoveContainer" containerID="1d21d7b187c75e362e31c6f333629885316f317de229c557269e00631d9659d8" Jan 30 22:07:31 crc 
kubenswrapper[4869]: I0130 22:07:31.990743 4869 patch_prober.go:28] interesting pod/machine-config-daemon-vzgdv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:07:31 crc kubenswrapper[4869]: I0130 22:07:31.991367 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:08:01 crc kubenswrapper[4869]: I0130 22:08:01.990283 4869 patch_prober.go:28] interesting pod/machine-config-daemon-vzgdv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:08:01 crc kubenswrapper[4869]: I0130 22:08:01.990851 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:08:01 crc kubenswrapper[4869]: I0130 22:08:01.990897 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" Jan 30 22:08:01 crc kubenswrapper[4869]: I0130 22:08:01.991720 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7ffcbacd9fdeb4349443b9f613e5c78dd198a3778ae8b5b04d896ffb86351bb7"} pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 22:08:01 crc kubenswrapper[4869]: I0130 22:08:01.991778 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500" containerName="machine-config-daemon" containerID="cri-o://7ffcbacd9fdeb4349443b9f613e5c78dd198a3778ae8b5b04d896ffb86351bb7" gracePeriod=600 Jan 30 22:08:02 crc kubenswrapper[4869]: I0130 22:08:02.541102 4869 generic.go:334] "Generic (PLEG): container finished" podID="b6fc0664-5e80-440d-a6e8-4189cdf5c500" containerID="7ffcbacd9fdeb4349443b9f613e5c78dd198a3778ae8b5b04d896ffb86351bb7" exitCode=0 Jan 30 22:08:02 crc kubenswrapper[4869]: I0130 22:08:02.541167 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" event={"ID":"b6fc0664-5e80-440d-a6e8-4189cdf5c500","Type":"ContainerDied","Data":"7ffcbacd9fdeb4349443b9f613e5c78dd198a3778ae8b5b04d896ffb86351bb7"} Jan 30 22:08:02 crc kubenswrapper[4869]: I0130 22:08:02.541478 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" event={"ID":"b6fc0664-5e80-440d-a6e8-4189cdf5c500","Type":"ContainerStarted","Data":"37739f8530d75ca7514f9e5e898fcdf553eb88295e259a8f463f2bada9b2701f"} Jan 30 22:08:02 crc kubenswrapper[4869]: I0130 22:08:02.541501 4869 scope.go:117] "RemoveContainer" 
containerID="fb268ebdb9eedb26a9652217f7a5aa752de4c3f089acc9c91036b9bb0160a969" Jan 30 22:08:16 crc kubenswrapper[4869]: I0130 22:08:16.623020 4869 generic.go:334] "Generic (PLEG): container finished" podID="79d4e833-512b-406a-b10a-b24fa5385330" containerID="77f339fcd5e04ff9e6e64c873351252dd0731032a91e84e6947ec00d49fdfda5" exitCode=0 Jan 30 22:08:16 crc kubenswrapper[4869]: I0130 22:08:16.623098 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-g8lrq/must-gather-2dbxg" event={"ID":"79d4e833-512b-406a-b10a-b24fa5385330","Type":"ContainerDied","Data":"77f339fcd5e04ff9e6e64c873351252dd0731032a91e84e6947ec00d49fdfda5"} Jan 30 22:08:16 crc kubenswrapper[4869]: I0130 22:08:16.624192 4869 scope.go:117] "RemoveContainer" containerID="77f339fcd5e04ff9e6e64c873351252dd0731032a91e84e6947ec00d49fdfda5" Jan 30 22:08:17 crc kubenswrapper[4869]: I0130 22:08:17.060209 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-g8lrq_must-gather-2dbxg_79d4e833-512b-406a-b10a-b24fa5385330/gather/0.log" Jan 30 22:08:22 crc kubenswrapper[4869]: I0130 22:08:22.584545 4869 scope.go:117] "RemoveContainer" containerID="acbb5e1c647f39750781d15018efb6f5b4a76bca075543b686f384d0c985155e" Jan 30 22:08:22 crc kubenswrapper[4869]: I0130 22:08:22.613031 4869 scope.go:117] "RemoveContainer" containerID="c757126fd6532f4cf5c4bb163ce8f6996b20ad2869c1493e2a0c112450bb386f" Jan 30 22:08:22 crc kubenswrapper[4869]: I0130 22:08:22.659474 4869 scope.go:117] "RemoveContainer" containerID="ed844f52c3dec4da536f571975e88d08b9d65c667fbc17cff09dfa92f9de7a5a" Jan 30 22:08:23 crc kubenswrapper[4869]: I0130 22:08:23.546621 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-g8lrq/must-gather-2dbxg"] Jan 30 22:08:23 crc kubenswrapper[4869]: I0130 22:08:23.547182 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-g8lrq/must-gather-2dbxg" podUID="79d4e833-512b-406a-b10a-b24fa5385330" containerName="copy" containerID="cri-o://aecb3bf74d890579bff2597c9eee782c6a20d8a4ac4c610809feebe323f5f65f" gracePeriod=2 Jan 30 22:08:23 crc kubenswrapper[4869]: I0130 22:08:23.549849 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-g8lrq/must-gather-2dbxg"] Jan 30 22:08:23 crc kubenswrapper[4869]: I0130 22:08:23.664713 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-g8lrq_must-gather-2dbxg_79d4e833-512b-406a-b10a-b24fa5385330/copy/0.log" Jan 30 22:08:23 crc kubenswrapper[4869]: I0130 22:08:23.665074 4869 generic.go:334] "Generic (PLEG): container finished" podID="79d4e833-512b-406a-b10a-b24fa5385330" containerID="aecb3bf74d890579bff2597c9eee782c6a20d8a4ac4c610809feebe323f5f65f" exitCode=143 Jan 30 22:08:23 crc kubenswrapper[4869]: I0130 22:08:23.961575 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-g8lrq_must-gather-2dbxg_79d4e833-512b-406a-b10a-b24fa5385330/copy/0.log" Jan 30 22:08:23 crc kubenswrapper[4869]: I0130 22:08:23.962023 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-g8lrq/must-gather-2dbxg" Jan 30 22:08:24 crc kubenswrapper[4869]: I0130 22:08:24.157055 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/79d4e833-512b-406a-b10a-b24fa5385330-must-gather-output\") pod \"79d4e833-512b-406a-b10a-b24fa5385330\" (UID: \"79d4e833-512b-406a-b10a-b24fa5385330\") " Jan 30 22:08:24 crc kubenswrapper[4869]: I0130 22:08:24.157137 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwkhk\" (UniqueName: \"kubernetes.io/projected/79d4e833-512b-406a-b10a-b24fa5385330-kube-api-access-wwkhk\") pod \"79d4e833-512b-406a-b10a-b24fa5385330\" (UID: \"79d4e833-512b-406a-b10a-b24fa5385330\") " Jan 30 22:08:24 crc kubenswrapper[4869]: I0130 22:08:24.165787 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79d4e833-512b-406a-b10a-b24fa5385330-kube-api-access-wwkhk" (OuterVolumeSpecName: "kube-api-access-wwkhk") pod "79d4e833-512b-406a-b10a-b24fa5385330" (UID: "79d4e833-512b-406a-b10a-b24fa5385330"). InnerVolumeSpecName "kube-api-access-wwkhk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:08:24 crc kubenswrapper[4869]: I0130 22:08:24.224387 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79d4e833-512b-406a-b10a-b24fa5385330-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "79d4e833-512b-406a-b10a-b24fa5385330" (UID: "79d4e833-512b-406a-b10a-b24fa5385330"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:08:24 crc kubenswrapper[4869]: I0130 22:08:24.258354 4869 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/79d4e833-512b-406a-b10a-b24fa5385330-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 30 22:08:24 crc kubenswrapper[4869]: I0130 22:08:24.258389 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwkhk\" (UniqueName: \"kubernetes.io/projected/79d4e833-512b-406a-b10a-b24fa5385330-kube-api-access-wwkhk\") on node \"crc\" DevicePath \"\"" Jan 30 22:08:24 crc kubenswrapper[4869]: I0130 22:08:24.671664 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-g8lrq_must-gather-2dbxg_79d4e833-512b-406a-b10a-b24fa5385330/copy/0.log" Jan 30 22:08:24 crc kubenswrapper[4869]: I0130 22:08:24.672084 4869 scope.go:117] "RemoveContainer" containerID="aecb3bf74d890579bff2597c9eee782c6a20d8a4ac4c610809feebe323f5f65f" Jan 30 22:08:24 crc kubenswrapper[4869]: I0130 22:08:24.672110 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-g8lrq/must-gather-2dbxg" Jan 30 22:08:24 crc kubenswrapper[4869]: I0130 22:08:24.740983 4869 scope.go:117] "RemoveContainer" containerID="77f339fcd5e04ff9e6e64c873351252dd0731032a91e84e6947ec00d49fdfda5" Jan 30 22:08:25 crc kubenswrapper[4869]: I0130 22:08:25.884028 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79d4e833-512b-406a-b10a-b24fa5385330" path="/var/lib/kubelet/pods/79d4e833-512b-406a-b10a-b24fa5385330/volumes" Jan 30 22:09:40 crc kubenswrapper[4869]: I0130 22:09:40.487094 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nw6zb"] Jan 30 22:09:40 crc kubenswrapper[4869]: E0130 22:09:40.487881 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79d4e833-512b-406a-b10a-b24fa5385330" containerName="copy" Jan 30 22:09:40 crc kubenswrapper[4869]: I0130 22:09:40.487912 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="79d4e833-512b-406a-b10a-b24fa5385330" containerName="copy" Jan 30 22:09:40 crc kubenswrapper[4869]: E0130 22:09:40.487926 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79d4e833-512b-406a-b10a-b24fa5385330" containerName="gather" Jan 30 22:09:40 crc kubenswrapper[4869]: I0130 22:09:40.487934 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="79d4e833-512b-406a-b10a-b24fa5385330" containerName="gather" Jan 30 22:09:40 crc kubenswrapper[4869]: I0130 22:09:40.488060 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="79d4e833-512b-406a-b10a-b24fa5385330" containerName="copy" Jan 30 22:09:40 crc kubenswrapper[4869]: I0130 22:09:40.488072 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="79d4e833-512b-406a-b10a-b24fa5385330" containerName="gather" Jan 30 22:09:40 crc kubenswrapper[4869]: I0130 22:09:40.488974 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nw6zb" Jan 30 22:09:40 crc kubenswrapper[4869]: I0130 22:09:40.513479 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nw6zb"] Jan 30 22:09:40 crc kubenswrapper[4869]: I0130 22:09:40.615694 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecc8fa8c-5b74-42ff-a375-fda7cc0dc893-utilities\") pod \"certified-operators-nw6zb\" (UID: \"ecc8fa8c-5b74-42ff-a375-fda7cc0dc893\") " pod="openshift-marketplace/certified-operators-nw6zb" Jan 30 22:09:40 crc kubenswrapper[4869]: I0130 22:09:40.615755 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwdjv\" (UniqueName: \"kubernetes.io/projected/ecc8fa8c-5b74-42ff-a375-fda7cc0dc893-kube-api-access-pwdjv\") pod \"certified-operators-nw6zb\" (UID: \"ecc8fa8c-5b74-42ff-a375-fda7cc0dc893\") " pod="openshift-marketplace/certified-operators-nw6zb" Jan 30 22:09:40 crc kubenswrapper[4869]: I0130 22:09:40.615853 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecc8fa8c-5b74-42ff-a375-fda7cc0dc893-catalog-content\") pod \"certified-operators-nw6zb\" (UID: \"ecc8fa8c-5b74-42ff-a375-fda7cc0dc893\") " pod="openshift-marketplace/certified-operators-nw6zb" Jan 30 22:09:40 crc kubenswrapper[4869]: I0130 22:09:40.716747 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecc8fa8c-5b74-42ff-a375-fda7cc0dc893-utilities\") pod \"certified-operators-nw6zb\" (UID: \"ecc8fa8c-5b74-42ff-a375-fda7cc0dc893\") " pod="openshift-marketplace/certified-operators-nw6zb" Jan 30 22:09:40 crc kubenswrapper[4869]: I0130 22:09:40.716800 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwdjv\" (UniqueName: \"kubernetes.io/projected/ecc8fa8c-5b74-42ff-a375-fda7cc0dc893-kube-api-access-pwdjv\") pod \"certified-operators-nw6zb\" (UID: \"ecc8fa8c-5b74-42ff-a375-fda7cc0dc893\") " pod="openshift-marketplace/certified-operators-nw6zb" Jan 30 22:09:40 crc kubenswrapper[4869]: I0130 22:09:40.716835 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecc8fa8c-5b74-42ff-a375-fda7cc0dc893-catalog-content\") pod \"certified-operators-nw6zb\" (UID: \"ecc8fa8c-5b74-42ff-a375-fda7cc0dc893\") " pod="openshift-marketplace/certified-operators-nw6zb" Jan 30 22:09:40 crc kubenswrapper[4869]: I0130 22:09:40.717405 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecc8fa8c-5b74-42ff-a375-fda7cc0dc893-catalog-content\") pod \"certified-operators-nw6zb\" (UID: \"ecc8fa8c-5b74-42ff-a375-fda7cc0dc893\") " pod="openshift-marketplace/certified-operators-nw6zb" Jan 30 22:09:40 crc kubenswrapper[4869]: I0130 22:09:40.717623 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecc8fa8c-5b74-42ff-a375-fda7cc0dc893-utilities\") pod \"certified-operators-nw6zb\" (UID: \"ecc8fa8c-5b74-42ff-a375-fda7cc0dc893\") " pod="openshift-marketplace/certified-operators-nw6zb" Jan 30 22:09:40 crc kubenswrapper[4869]: I0130 22:09:40.742972 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-pwdjv\" (UniqueName: \"kubernetes.io/projected/ecc8fa8c-5b74-42ff-a375-fda7cc0dc893-kube-api-access-pwdjv\") pod \"certified-operators-nw6zb\" (UID: \"ecc8fa8c-5b74-42ff-a375-fda7cc0dc893\") " pod="openshift-marketplace/certified-operators-nw6zb" Jan 30 22:09:40 crc kubenswrapper[4869]: I0130 22:09:40.809054 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nw6zb" Jan 30 22:09:41 crc kubenswrapper[4869]: I0130 22:09:41.289165 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nw6zb"] Jan 30 22:09:41 crc kubenswrapper[4869]: W0130 22:09:41.296637 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podecc8fa8c_5b74_42ff_a375_fda7cc0dc893.slice/crio-7eff0e482fb98983b53d846d3838cf156eea5ea333c0277f0c4d2ebe2f40ad5e WatchSource:0}: Error finding container 7eff0e482fb98983b53d846d3838cf156eea5ea333c0277f0c4d2ebe2f40ad5e: Status 404 returned error can't find the container with id 7eff0e482fb98983b53d846d3838cf156eea5ea333c0277f0c4d2ebe2f40ad5e Jan 30 22:09:42 crc kubenswrapper[4869]: I0130 22:09:42.136856 4869 generic.go:334] "Generic (PLEG): container finished" podID="ecc8fa8c-5b74-42ff-a375-fda7cc0dc893" containerID="5d64d198682afdca68d71c64a8df1d20bb6015a1907cae184c4db4b727fa332d" exitCode=0 Jan 30 22:09:42 crc kubenswrapper[4869]: I0130 22:09:42.136937 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nw6zb" event={"ID":"ecc8fa8c-5b74-42ff-a375-fda7cc0dc893","Type":"ContainerDied","Data":"5d64d198682afdca68d71c64a8df1d20bb6015a1907cae184c4db4b727fa332d"} Jan 30 22:09:42 crc kubenswrapper[4869]: I0130 22:09:42.136969 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nw6zb" event={"ID":"ecc8fa8c-5b74-42ff-a375-fda7cc0dc893","Type":"ContainerStarted","Data":"7eff0e482fb98983b53d846d3838cf156eea5ea333c0277f0c4d2ebe2f40ad5e"} Jan 30 22:09:42 crc kubenswrapper[4869]: I0130 22:09:42.138908 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 22:09:43 crc kubenswrapper[4869]: I0130 22:09:43.147236 4869 generic.go:334] "Generic (PLEG): container finished" podID="ecc8fa8c-5b74-42ff-a375-fda7cc0dc893" containerID="1524dc4fafe86bc0bf1f905b5ee0b5232ce21604777a32ddea05e3fde5c129b6" exitCode=0 Jan 30 22:09:43 crc kubenswrapper[4869]: I0130 22:09:43.147368 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nw6zb" event={"ID":"ecc8fa8c-5b74-42ff-a375-fda7cc0dc893","Type":"ContainerDied","Data":"1524dc4fafe86bc0bf1f905b5ee0b5232ce21604777a32ddea05e3fde5c129b6"} Jan 30 22:09:44 crc kubenswrapper[4869]: I0130 22:09:44.158523 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nw6zb" event={"ID":"ecc8fa8c-5b74-42ff-a375-fda7cc0dc893","Type":"ContainerStarted","Data":"7da4979c8527f4e4abf463ade7d29c9856ef528b7b49b446900c3c7f86d2ee04"} Jan 30 22:09:44 crc kubenswrapper[4869]: I0130 22:09:44.178981 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nw6zb" podStartSLOduration=2.721607644 podStartE2EDuration="4.178954086s" podCreationTimestamp="2026-01-30 22:09:40 +0000 UTC" firstStartedPulling="2026-01-30 22:09:42.13858334 +0000 UTC 
m=+1583.024341355" lastFinishedPulling="2026-01-30 22:09:43.595929772 +0000 UTC m=+1584.481687797" observedRunningTime="2026-01-30 22:09:44.174194206 +0000 UTC m=+1585.059952231" watchObservedRunningTime="2026-01-30 22:09:44.178954086 +0000 UTC m=+1585.064712111" Jan 30 22:09:47 crc kubenswrapper[4869]: I0130 22:09:47.484785 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mj7f7"] Jan 30 22:09:47 crc kubenswrapper[4869]: I0130 22:09:47.486656 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mj7f7" Jan 30 22:09:47 crc kubenswrapper[4869]: I0130 22:09:47.496709 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mj7f7"] Jan 30 22:09:47 crc kubenswrapper[4869]: I0130 22:09:47.607864 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/921fc9d1-6818-43b8-bd85-2f23b67fa8f5-catalog-content\") pod \"redhat-marketplace-mj7f7\" (UID: \"921fc9d1-6818-43b8-bd85-2f23b67fa8f5\") " pod="openshift-marketplace/redhat-marketplace-mj7f7" Jan 30 22:09:47 crc kubenswrapper[4869]: I0130 22:09:47.608026 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/921fc9d1-6818-43b8-bd85-2f23b67fa8f5-utilities\") pod \"redhat-marketplace-mj7f7\" (UID: \"921fc9d1-6818-43b8-bd85-2f23b67fa8f5\") " pod="openshift-marketplace/redhat-marketplace-mj7f7" Jan 30 22:09:47 crc kubenswrapper[4869]: I0130 22:09:47.608136 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwsnk\" (UniqueName: \"kubernetes.io/projected/921fc9d1-6818-43b8-bd85-2f23b67fa8f5-kube-api-access-cwsnk\") pod \"redhat-marketplace-mj7f7\" (UID: \"921fc9d1-6818-43b8-bd85-2f23b67fa8f5\") " pod="openshift-marketplace/redhat-marketplace-mj7f7" Jan 30 22:09:47 crc kubenswrapper[4869]: I0130 22:09:47.709118 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/921fc9d1-6818-43b8-bd85-2f23b67fa8f5-catalog-content\") pod \"redhat-marketplace-mj7f7\" (UID: \"921fc9d1-6818-43b8-bd85-2f23b67fa8f5\") " pod="openshift-marketplace/redhat-marketplace-mj7f7" Jan 30 22:09:47 crc kubenswrapper[4869]: I0130 22:09:47.709177 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/921fc9d1-6818-43b8-bd85-2f23b67fa8f5-utilities\") pod \"redhat-marketplace-mj7f7\" (UID: \"921fc9d1-6818-43b8-bd85-2f23b67fa8f5\") " pod="openshift-marketplace/redhat-marketplace-mj7f7" Jan 30 22:09:47 crc kubenswrapper[4869]: I0130 22:09:47.709268 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwsnk\" (UniqueName: \"kubernetes.io/projected/921fc9d1-6818-43b8-bd85-2f23b67fa8f5-kube-api-access-cwsnk\") pod \"redhat-marketplace-mj7f7\" (UID: \"921fc9d1-6818-43b8-bd85-2f23b67fa8f5\") " pod="openshift-marketplace/redhat-marketplace-mj7f7" Jan 30 22:09:47 crc kubenswrapper[4869]: I0130 22:09:47.709615 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/921fc9d1-6818-43b8-bd85-2f23b67fa8f5-utilities\") pod \"redhat-marketplace-mj7f7\" (UID: \"921fc9d1-6818-43b8-bd85-2f23b67fa8f5\") " 
pod="openshift-marketplace/redhat-marketplace-mj7f7" Jan 30 22:09:47 crc kubenswrapper[4869]: I0130 22:09:47.709723 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/921fc9d1-6818-43b8-bd85-2f23b67fa8f5-catalog-content\") pod \"redhat-marketplace-mj7f7\" (UID: \"921fc9d1-6818-43b8-bd85-2f23b67fa8f5\") " pod="openshift-marketplace/redhat-marketplace-mj7f7" Jan 30 22:09:47 crc kubenswrapper[4869]: I0130 22:09:47.733097 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwsnk\" (UniqueName: \"kubernetes.io/projected/921fc9d1-6818-43b8-bd85-2f23b67fa8f5-kube-api-access-cwsnk\") pod \"redhat-marketplace-mj7f7\" (UID: \"921fc9d1-6818-43b8-bd85-2f23b67fa8f5\") " pod="openshift-marketplace/redhat-marketplace-mj7f7" Jan 30 22:09:47 crc kubenswrapper[4869]: I0130 22:09:47.816795 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mj7f7" Jan 30 22:09:48 crc kubenswrapper[4869]: I0130 22:09:48.032071 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mj7f7"] Jan 30 22:09:48 crc kubenswrapper[4869]: W0130 22:09:48.037097 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod921fc9d1_6818_43b8_bd85_2f23b67fa8f5.slice/crio-6abe8af3ca50f0422d95faf181df6b840b2fbf025594bd9d81657eb99417d823 WatchSource:0}: Error finding container 6abe8af3ca50f0422d95faf181df6b840b2fbf025594bd9d81657eb99417d823: Status 404 returned error can't find the container with id 6abe8af3ca50f0422d95faf181df6b840b2fbf025594bd9d81657eb99417d823 Jan 30 22:09:48 crc kubenswrapper[4869]: I0130 22:09:48.183023 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mj7f7" event={"ID":"921fc9d1-6818-43b8-bd85-2f23b67fa8f5","Type":"ContainerStarted","Data":"6abe8af3ca50f0422d95faf181df6b840b2fbf025594bd9d81657eb99417d823"} Jan 30 22:09:49 crc kubenswrapper[4869]: I0130 22:09:49.189683 4869 generic.go:334] "Generic (PLEG): container finished" podID="921fc9d1-6818-43b8-bd85-2f23b67fa8f5" containerID="af1955ed3798eccb73511ba8dcb8c0e7018da1b6c2d01a7c642dfa20e527e006" exitCode=0 Jan 30 22:09:49 crc kubenswrapper[4869]: I0130 22:09:49.189746 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mj7f7" event={"ID":"921fc9d1-6818-43b8-bd85-2f23b67fa8f5","Type":"ContainerDied","Data":"af1955ed3798eccb73511ba8dcb8c0e7018da1b6c2d01a7c642dfa20e527e006"} Jan 30 22:09:50 crc kubenswrapper[4869]: I0130 22:09:50.810381 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nw6zb" Jan 30 22:09:50 crc kubenswrapper[4869]: I0130 22:09:50.810442 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nw6zb" Jan 30 22:09:50 crc kubenswrapper[4869]: I0130 22:09:50.857051 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nw6zb" Jan 30 22:09:51 crc kubenswrapper[4869]: I0130 22:09:51.201251 4869 generic.go:334] "Generic (PLEG): container finished" podID="921fc9d1-6818-43b8-bd85-2f23b67fa8f5" containerID="aacb8f1c738541bee178ee097dcd066e569c2b8fcda496982808d9d94991b2a3" exitCode=0 Jan 30 22:09:51 crc kubenswrapper[4869]: I0130 22:09:51.201446 4869 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mj7f7" event={"ID":"921fc9d1-6818-43b8-bd85-2f23b67fa8f5","Type":"ContainerDied","Data":"aacb8f1c738541bee178ee097dcd066e569c2b8fcda496982808d9d94991b2a3"} Jan 30 22:09:51 crc kubenswrapper[4869]: I0130 22:09:51.243035 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nw6zb" Jan 30 22:09:52 crc kubenswrapper[4869]: I0130 22:09:52.209553 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mj7f7" event={"ID":"921fc9d1-6818-43b8-bd85-2f23b67fa8f5","Type":"ContainerStarted","Data":"4bfb097f26576adc8133cb8b67cda3ac87c8af16e5c1a82a32c5975978d6150f"} Jan 30 22:09:52 crc kubenswrapper[4869]: I0130 22:09:52.231175 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mj7f7" podStartSLOduration=2.6847273 podStartE2EDuration="5.231153887s" podCreationTimestamp="2026-01-30 22:09:47 +0000 UTC" firstStartedPulling="2026-01-30 22:09:49.191084527 +0000 UTC m=+1590.076842562" lastFinishedPulling="2026-01-30 22:09:51.737511124 +0000 UTC m=+1592.623269149" observedRunningTime="2026-01-30 22:09:52.226107438 +0000 UTC m=+1593.111865463" watchObservedRunningTime="2026-01-30 22:09:52.231153887 +0000 UTC m=+1593.116911912" Jan 30 22:09:52 crc kubenswrapper[4869]: I0130 22:09:52.862157 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nw6zb"] Jan 30 22:09:53 crc kubenswrapper[4869]: I0130 22:09:53.219715 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nw6zb" podUID="ecc8fa8c-5b74-42ff-a375-fda7cc0dc893" containerName="registry-server" containerID="cri-o://7da4979c8527f4e4abf463ade7d29c9856ef528b7b49b446900c3c7f86d2ee04" gracePeriod=2 Jan 30 22:09:55 crc kubenswrapper[4869]: I0130 22:09:55.072184 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nw6zb" Jan 30 22:09:55 crc kubenswrapper[4869]: I0130 22:09:55.205414 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecc8fa8c-5b74-42ff-a375-fda7cc0dc893-utilities\") pod \"ecc8fa8c-5b74-42ff-a375-fda7cc0dc893\" (UID: \"ecc8fa8c-5b74-42ff-a375-fda7cc0dc893\") " Jan 30 22:09:55 crc kubenswrapper[4869]: I0130 22:09:55.205534 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecc8fa8c-5b74-42ff-a375-fda7cc0dc893-catalog-content\") pod \"ecc8fa8c-5b74-42ff-a375-fda7cc0dc893\" (UID: \"ecc8fa8c-5b74-42ff-a375-fda7cc0dc893\") " Jan 30 22:09:55 crc kubenswrapper[4869]: I0130 22:09:55.205607 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwdjv\" (UniqueName: \"kubernetes.io/projected/ecc8fa8c-5b74-42ff-a375-fda7cc0dc893-kube-api-access-pwdjv\") pod \"ecc8fa8c-5b74-42ff-a375-fda7cc0dc893\" (UID: \"ecc8fa8c-5b74-42ff-a375-fda7cc0dc893\") " Jan 30 22:09:55 crc kubenswrapper[4869]: I0130 22:09:55.206220 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecc8fa8c-5b74-42ff-a375-fda7cc0dc893-utilities" (OuterVolumeSpecName: "utilities") pod "ecc8fa8c-5b74-42ff-a375-fda7cc0dc893" (UID: "ecc8fa8c-5b74-42ff-a375-fda7cc0dc893"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:09:55 crc kubenswrapper[4869]: I0130 22:09:55.211332 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecc8fa8c-5b74-42ff-a375-fda7cc0dc893-kube-api-access-pwdjv" (OuterVolumeSpecName: "kube-api-access-pwdjv") pod "ecc8fa8c-5b74-42ff-a375-fda7cc0dc893" (UID: "ecc8fa8c-5b74-42ff-a375-fda7cc0dc893"). InnerVolumeSpecName "kube-api-access-pwdjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:09:55 crc kubenswrapper[4869]: I0130 22:09:55.236526 4869 generic.go:334] "Generic (PLEG): container finished" podID="ecc8fa8c-5b74-42ff-a375-fda7cc0dc893" containerID="7da4979c8527f4e4abf463ade7d29c9856ef528b7b49b446900c3c7f86d2ee04" exitCode=0 Jan 30 22:09:55 crc kubenswrapper[4869]: I0130 22:09:55.236571 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nw6zb" event={"ID":"ecc8fa8c-5b74-42ff-a375-fda7cc0dc893","Type":"ContainerDied","Data":"7da4979c8527f4e4abf463ade7d29c9856ef528b7b49b446900c3c7f86d2ee04"} Jan 30 22:09:55 crc kubenswrapper[4869]: I0130 22:09:55.236606 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nw6zb" event={"ID":"ecc8fa8c-5b74-42ff-a375-fda7cc0dc893","Type":"ContainerDied","Data":"7eff0e482fb98983b53d846d3838cf156eea5ea333c0277f0c4d2ebe2f40ad5e"} Jan 30 22:09:55 crc kubenswrapper[4869]: I0130 22:09:55.236626 4869 scope.go:117] "RemoveContainer" containerID="7da4979c8527f4e4abf463ade7d29c9856ef528b7b49b446900c3c7f86d2ee04" Jan 30 22:09:55 crc kubenswrapper[4869]: I0130 22:09:55.236656 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nw6zb" Jan 30 22:09:55 crc kubenswrapper[4869]: I0130 22:09:55.259043 4869 scope.go:117] "RemoveContainer" containerID="1524dc4fafe86bc0bf1f905b5ee0b5232ce21604777a32ddea05e3fde5c129b6" Jan 30 22:09:55 crc kubenswrapper[4869]: I0130 22:09:55.290237 4869 scope.go:117] "RemoveContainer" containerID="5d64d198682afdca68d71c64a8df1d20bb6015a1907cae184c4db4b727fa332d" Jan 30 22:09:55 crc kubenswrapper[4869]: I0130 22:09:55.293708 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecc8fa8c-5b74-42ff-a375-fda7cc0dc893-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ecc8fa8c-5b74-42ff-a375-fda7cc0dc893" (UID: "ecc8fa8c-5b74-42ff-a375-fda7cc0dc893"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:09:55 crc kubenswrapper[4869]: I0130 22:09:55.306774 4869 scope.go:117] "RemoveContainer" containerID="7da4979c8527f4e4abf463ade7d29c9856ef528b7b49b446900c3c7f86d2ee04" Jan 30 22:09:55 crc kubenswrapper[4869]: E0130 22:09:55.307404 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7da4979c8527f4e4abf463ade7d29c9856ef528b7b49b446900c3c7f86d2ee04\": container with ID starting with 7da4979c8527f4e4abf463ade7d29c9856ef528b7b49b446900c3c7f86d2ee04 not found: ID does not exist" containerID="7da4979c8527f4e4abf463ade7d29c9856ef528b7b49b446900c3c7f86d2ee04" Jan 30 22:09:55 crc kubenswrapper[4869]: I0130 22:09:55.307463 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7da4979c8527f4e4abf463ade7d29c9856ef528b7b49b446900c3c7f86d2ee04"} err="failed to get container status \"7da4979c8527f4e4abf463ade7d29c9856ef528b7b49b446900c3c7f86d2ee04\": rpc error: code = NotFound desc = could not find container \"7da4979c8527f4e4abf463ade7d29c9856ef528b7b49b446900c3c7f86d2ee04\": container with ID starting with 7da4979c8527f4e4abf463ade7d29c9856ef528b7b49b446900c3c7f86d2ee04 not found: ID does not exist" Jan 30 22:09:55 crc kubenswrapper[4869]: I0130 22:09:55.307495 4869 scope.go:117] "RemoveContainer" containerID="1524dc4fafe86bc0bf1f905b5ee0b5232ce21604777a32ddea05e3fde5c129b6" Jan 30 22:09:55 crc kubenswrapper[4869]: I0130 22:09:55.307785 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecc8fa8c-5b74-42ff-a375-fda7cc0dc893-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:09:55 crc kubenswrapper[4869]: I0130 22:09:55.307819 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecc8fa8c-5b74-42ff-a375-fda7cc0dc893-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:09:55 crc kubenswrapper[4869]: I0130 22:09:55.307838 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwdjv\" (UniqueName: \"kubernetes.io/projected/ecc8fa8c-5b74-42ff-a375-fda7cc0dc893-kube-api-access-pwdjv\") on node \"crc\" DevicePath \"\"" Jan 30 22:09:55 crc kubenswrapper[4869]: E0130 22:09:55.307883 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1524dc4fafe86bc0bf1f905b5ee0b5232ce21604777a32ddea05e3fde5c129b6\": container with ID starting with 1524dc4fafe86bc0bf1f905b5ee0b5232ce21604777a32ddea05e3fde5c129b6 not found: ID does not exist" containerID="1524dc4fafe86bc0bf1f905b5ee0b5232ce21604777a32ddea05e3fde5c129b6" Jan 30 22:09:55 crc kubenswrapper[4869]: I0130 22:09:55.307939 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1524dc4fafe86bc0bf1f905b5ee0b5232ce21604777a32ddea05e3fde5c129b6"} err="failed to get container status \"1524dc4fafe86bc0bf1f905b5ee0b5232ce21604777a32ddea05e3fde5c129b6\": rpc error: code = NotFound desc = could not find container \"1524dc4fafe86bc0bf1f905b5ee0b5232ce21604777a32ddea05e3fde5c129b6\": container with ID starting with 1524dc4fafe86bc0bf1f905b5ee0b5232ce21604777a32ddea05e3fde5c129b6 not found: ID does not exist" Jan 30 22:09:55 crc kubenswrapper[4869]: I0130 22:09:55.307968 4869 scope.go:117] "RemoveContainer" containerID="5d64d198682afdca68d71c64a8df1d20bb6015a1907cae184c4db4b727fa332d" Jan 30 22:09:55 crc 
kubenswrapper[4869]: E0130 22:09:55.308320 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d64d198682afdca68d71c64a8df1d20bb6015a1907cae184c4db4b727fa332d\": container with ID starting with 5d64d198682afdca68d71c64a8df1d20bb6015a1907cae184c4db4b727fa332d not found: ID does not exist" containerID="5d64d198682afdca68d71c64a8df1d20bb6015a1907cae184c4db4b727fa332d" Jan 30 22:09:55 crc kubenswrapper[4869]: I0130 22:09:55.308381 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d64d198682afdca68d71c64a8df1d20bb6015a1907cae184c4db4b727fa332d"} err="failed to get container status \"5d64d198682afdca68d71c64a8df1d20bb6015a1907cae184c4db4b727fa332d\": rpc error: code = NotFound desc = could not find container \"5d64d198682afdca68d71c64a8df1d20bb6015a1907cae184c4db4b727fa332d\": container with ID starting with 5d64d198682afdca68d71c64a8df1d20bb6015a1907cae184c4db4b727fa332d not found: ID does not exist" Jan 30 22:09:55 crc kubenswrapper[4869]: I0130 22:09:55.563375 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nw6zb"] Jan 30 22:09:55 crc kubenswrapper[4869]: I0130 22:09:55.567743 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nw6zb"] Jan 30 22:09:55 crc kubenswrapper[4869]: I0130 22:09:55.884703 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecc8fa8c-5b74-42ff-a375-fda7cc0dc893" path="/var/lib/kubelet/pods/ecc8fa8c-5b74-42ff-a375-fda7cc0dc893/volumes" Jan 30 22:09:57 crc kubenswrapper[4869]: I0130 22:09:57.817001 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mj7f7" Jan 30 22:09:57 crc kubenswrapper[4869]: I0130 22:09:57.817083 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mj7f7" Jan 30 22:09:57 crc kubenswrapper[4869]: I0130 22:09:57.857152 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mj7f7" Jan 30 22:09:58 crc kubenswrapper[4869]: I0130 22:09:58.315092 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mj7f7" Jan 30 22:09:59 crc kubenswrapper[4869]: I0130 22:09:59.459302 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mj7f7"] Jan 30 22:10:00 crc kubenswrapper[4869]: I0130 22:10:00.271888 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mj7f7" podUID="921fc9d1-6818-43b8-bd85-2f23b67fa8f5" containerName="registry-server" containerID="cri-o://4bfb097f26576adc8133cb8b67cda3ac87c8af16e5c1a82a32c5975978d6150f" gracePeriod=2 Jan 30 22:10:00 crc kubenswrapper[4869]: I0130 22:10:00.589518 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mj7f7" Jan 30 22:10:00 crc kubenswrapper[4869]: I0130 22:10:00.775926 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/921fc9d1-6818-43b8-bd85-2f23b67fa8f5-catalog-content\") pod \"921fc9d1-6818-43b8-bd85-2f23b67fa8f5\" (UID: \"921fc9d1-6818-43b8-bd85-2f23b67fa8f5\") " Jan 30 22:10:00 crc kubenswrapper[4869]: I0130 22:10:00.775997 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/921fc9d1-6818-43b8-bd85-2f23b67fa8f5-utilities\") pod \"921fc9d1-6818-43b8-bd85-2f23b67fa8f5\" (UID: \"921fc9d1-6818-43b8-bd85-2f23b67fa8f5\") " Jan 30 22:10:00 crc kubenswrapper[4869]: I0130 22:10:00.776071 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwsnk\" (UniqueName: \"kubernetes.io/projected/921fc9d1-6818-43b8-bd85-2f23b67fa8f5-kube-api-access-cwsnk\") pod \"921fc9d1-6818-43b8-bd85-2f23b67fa8f5\" (UID: \"921fc9d1-6818-43b8-bd85-2f23b67fa8f5\") " Jan 30 22:10:00 crc kubenswrapper[4869]: I0130 22:10:00.777252 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/921fc9d1-6818-43b8-bd85-2f23b67fa8f5-utilities" (OuterVolumeSpecName: "utilities") pod "921fc9d1-6818-43b8-bd85-2f23b67fa8f5" (UID: "921fc9d1-6818-43b8-bd85-2f23b67fa8f5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:10:00 crc kubenswrapper[4869]: I0130 22:10:00.780936 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/921fc9d1-6818-43b8-bd85-2f23b67fa8f5-kube-api-access-cwsnk" (OuterVolumeSpecName: "kube-api-access-cwsnk") pod "921fc9d1-6818-43b8-bd85-2f23b67fa8f5" (UID: "921fc9d1-6818-43b8-bd85-2f23b67fa8f5"). InnerVolumeSpecName "kube-api-access-cwsnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:10:00 crc kubenswrapper[4869]: I0130 22:10:00.801637 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/921fc9d1-6818-43b8-bd85-2f23b67fa8f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "921fc9d1-6818-43b8-bd85-2f23b67fa8f5" (UID: "921fc9d1-6818-43b8-bd85-2f23b67fa8f5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:10:00 crc kubenswrapper[4869]: I0130 22:10:00.877476 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/921fc9d1-6818-43b8-bd85-2f23b67fa8f5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:10:00 crc kubenswrapper[4869]: I0130 22:10:00.877514 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/921fc9d1-6818-43b8-bd85-2f23b67fa8f5-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:10:00 crc kubenswrapper[4869]: I0130 22:10:00.877527 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwsnk\" (UniqueName: \"kubernetes.io/projected/921fc9d1-6818-43b8-bd85-2f23b67fa8f5-kube-api-access-cwsnk\") on node \"crc\" DevicePath \"\"" Jan 30 22:10:01 crc kubenswrapper[4869]: I0130 22:10:01.282335 4869 generic.go:334] "Generic (PLEG): container finished" podID="921fc9d1-6818-43b8-bd85-2f23b67fa8f5" containerID="4bfb097f26576adc8133cb8b67cda3ac87c8af16e5c1a82a32c5975978d6150f" exitCode=0 Jan 30 22:10:01 crc kubenswrapper[4869]: I0130 22:10:01.282431 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mj7f7" event={"ID":"921fc9d1-6818-43b8-bd85-2f23b67fa8f5","Type":"ContainerDied","Data":"4bfb097f26576adc8133cb8b67cda3ac87c8af16e5c1a82a32c5975978d6150f"} Jan 30 22:10:01 crc kubenswrapper[4869]: I0130 22:10:01.283139 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mj7f7" event={"ID":"921fc9d1-6818-43b8-bd85-2f23b67fa8f5","Type":"ContainerDied","Data":"6abe8af3ca50f0422d95faf181df6b840b2fbf025594bd9d81657eb99417d823"} Jan 30 22:10:01 crc kubenswrapper[4869]: I0130 22:10:01.283210 4869 scope.go:117] "RemoveContainer" containerID="4bfb097f26576adc8133cb8b67cda3ac87c8af16e5c1a82a32c5975978d6150f" Jan 30 22:10:01 crc kubenswrapper[4869]: I0130 22:10:01.282440 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mj7f7" Jan 30 22:10:01 crc kubenswrapper[4869]: I0130 22:10:01.303345 4869 scope.go:117] "RemoveContainer" containerID="aacb8f1c738541bee178ee097dcd066e569c2b8fcda496982808d9d94991b2a3" Jan 30 22:10:01 crc kubenswrapper[4869]: I0130 22:10:01.316942 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mj7f7"] Jan 30 22:10:01 crc kubenswrapper[4869]: I0130 22:10:01.319022 4869 scope.go:117] "RemoveContainer" containerID="af1955ed3798eccb73511ba8dcb8c0e7018da1b6c2d01a7c642dfa20e527e006" Jan 30 22:10:01 crc kubenswrapper[4869]: I0130 22:10:01.322870 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mj7f7"] Jan 30 22:10:01 crc kubenswrapper[4869]: I0130 22:10:01.347780 4869 scope.go:117] "RemoveContainer" containerID="4bfb097f26576adc8133cb8b67cda3ac87c8af16e5c1a82a32c5975978d6150f" Jan 30 22:10:01 crc kubenswrapper[4869]: E0130 22:10:01.348351 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bfb097f26576adc8133cb8b67cda3ac87c8af16e5c1a82a32c5975978d6150f\": container with ID starting with 4bfb097f26576adc8133cb8b67cda3ac87c8af16e5c1a82a32c5975978d6150f not found: ID does not exist" containerID="4bfb097f26576adc8133cb8b67cda3ac87c8af16e5c1a82a32c5975978d6150f" Jan 30 22:10:01 crc kubenswrapper[4869]: I0130 22:10:01.348400 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bfb097f26576adc8133cb8b67cda3ac87c8af16e5c1a82a32c5975978d6150f"} err="failed to get container status \"4bfb097f26576adc8133cb8b67cda3ac87c8af16e5c1a82a32c5975978d6150f\": rpc error: code = NotFound desc = could not find container \"4bfb097f26576adc8133cb8b67cda3ac87c8af16e5c1a82a32c5975978d6150f\": container with ID starting with 4bfb097f26576adc8133cb8b67cda3ac87c8af16e5c1a82a32c5975978d6150f not found: ID does not exist" Jan 30 22:10:01 crc kubenswrapper[4869]: I0130 22:10:01.348431 4869 scope.go:117] "RemoveContainer" containerID="aacb8f1c738541bee178ee097dcd066e569c2b8fcda496982808d9d94991b2a3" Jan 30 22:10:01 crc kubenswrapper[4869]: E0130 22:10:01.348939 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aacb8f1c738541bee178ee097dcd066e569c2b8fcda496982808d9d94991b2a3\": container with ID starting with aacb8f1c738541bee178ee097dcd066e569c2b8fcda496982808d9d94991b2a3 not found: ID does not exist" containerID="aacb8f1c738541bee178ee097dcd066e569c2b8fcda496982808d9d94991b2a3" Jan 30 22:10:01 crc kubenswrapper[4869]: I0130 22:10:01.348966 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aacb8f1c738541bee178ee097dcd066e569c2b8fcda496982808d9d94991b2a3"} err="failed to get container status \"aacb8f1c738541bee178ee097dcd066e569c2b8fcda496982808d9d94991b2a3\": rpc error: code = NotFound desc = could not find container \"aacb8f1c738541bee178ee097dcd066e569c2b8fcda496982808d9d94991b2a3\": container with ID starting with aacb8f1c738541bee178ee097dcd066e569c2b8fcda496982808d9d94991b2a3 not found: ID does not exist" Jan 30 22:10:01 crc kubenswrapper[4869]: I0130 22:10:01.348980 4869 scope.go:117] "RemoveContainer" containerID="af1955ed3798eccb73511ba8dcb8c0e7018da1b6c2d01a7c642dfa20e527e006" Jan 30 22:10:01 crc kubenswrapper[4869]: E0130 22:10:01.349293 4869 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"af1955ed3798eccb73511ba8dcb8c0e7018da1b6c2d01a7c642dfa20e527e006\": container with ID starting with af1955ed3798eccb73511ba8dcb8c0e7018da1b6c2d01a7c642dfa20e527e006 not found: ID does not exist" containerID="af1955ed3798eccb73511ba8dcb8c0e7018da1b6c2d01a7c642dfa20e527e006" Jan 30 22:10:01 crc kubenswrapper[4869]: I0130 22:10:01.349320 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af1955ed3798eccb73511ba8dcb8c0e7018da1b6c2d01a7c642dfa20e527e006"} err="failed to get container status \"af1955ed3798eccb73511ba8dcb8c0e7018da1b6c2d01a7c642dfa20e527e006\": rpc error: code = NotFound desc = could not find container \"af1955ed3798eccb73511ba8dcb8c0e7018da1b6c2d01a7c642dfa20e527e006\": container with ID starting with af1955ed3798eccb73511ba8dcb8c0e7018da1b6c2d01a7c642dfa20e527e006 not found: ID does not exist" Jan 30 22:10:01 crc kubenswrapper[4869]: I0130 22:10:01.883075 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="921fc9d1-6818-43b8-bd85-2f23b67fa8f5" path="/var/lib/kubelet/pods/921fc9d1-6818-43b8-bd85-2f23b67fa8f5/volumes" Jan 30 22:10:10 crc kubenswrapper[4869]: I0130 22:10:10.702074 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4z8jv"] Jan 30 22:10:10 crc kubenswrapper[4869]: E0130 22:10:10.702824 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="921fc9d1-6818-43b8-bd85-2f23b67fa8f5" containerName="registry-server" Jan 30 22:10:10 crc kubenswrapper[4869]: I0130 22:10:10.702841 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="921fc9d1-6818-43b8-bd85-2f23b67fa8f5" containerName="registry-server" Jan 30 22:10:10 crc kubenswrapper[4869]: E0130 22:10:10.702855 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecc8fa8c-5b74-42ff-a375-fda7cc0dc893" containerName="registry-server" Jan 30 22:10:10 crc kubenswrapper[4869]: I0130 22:10:10.702862 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecc8fa8c-5b74-42ff-a375-fda7cc0dc893" containerName="registry-server" Jan 30 22:10:10 crc kubenswrapper[4869]: E0130 22:10:10.702882 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecc8fa8c-5b74-42ff-a375-fda7cc0dc893" containerName="extract-utilities" Jan 30 22:10:10 crc kubenswrapper[4869]: I0130 22:10:10.702907 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecc8fa8c-5b74-42ff-a375-fda7cc0dc893" containerName="extract-utilities" Jan 30 22:10:10 crc kubenswrapper[4869]: E0130 22:10:10.702919 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="921fc9d1-6818-43b8-bd85-2f23b67fa8f5" containerName="extract-utilities" Jan 30 22:10:10 crc kubenswrapper[4869]: I0130 22:10:10.702926 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="921fc9d1-6818-43b8-bd85-2f23b67fa8f5" containerName="extract-utilities" Jan 30 22:10:10 crc kubenswrapper[4869]: E0130 22:10:10.702939 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecc8fa8c-5b74-42ff-a375-fda7cc0dc893" containerName="extract-content" Jan 30 22:10:10 crc kubenswrapper[4869]: I0130 22:10:10.702945 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecc8fa8c-5b74-42ff-a375-fda7cc0dc893" containerName="extract-content" Jan 30 22:10:10 crc kubenswrapper[4869]: E0130 22:10:10.702955 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="921fc9d1-6818-43b8-bd85-2f23b67fa8f5" 
containerName="extract-content" Jan 30 22:10:10 crc kubenswrapper[4869]: I0130 22:10:10.702963 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="921fc9d1-6818-43b8-bd85-2f23b67fa8f5" containerName="extract-content" Jan 30 22:10:10 crc kubenswrapper[4869]: I0130 22:10:10.703082 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecc8fa8c-5b74-42ff-a375-fda7cc0dc893" containerName="registry-server" Jan 30 22:10:10 crc kubenswrapper[4869]: I0130 22:10:10.703099 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="921fc9d1-6818-43b8-bd85-2f23b67fa8f5" containerName="registry-server" Jan 30 22:10:10 crc kubenswrapper[4869]: I0130 22:10:10.704102 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4z8jv" Jan 30 22:10:10 crc kubenswrapper[4869]: I0130 22:10:10.709929 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4z8jv"] Jan 30 22:10:10 crc kubenswrapper[4869]: I0130 22:10:10.812460 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c2475ba-463b-498a-af81-983d0a8b3174-utilities\") pod \"community-operators-4z8jv\" (UID: \"0c2475ba-463b-498a-af81-983d0a8b3174\") " pod="openshift-marketplace/community-operators-4z8jv" Jan 30 22:10:10 crc kubenswrapper[4869]: I0130 22:10:10.812814 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c2475ba-463b-498a-af81-983d0a8b3174-catalog-content\") pod \"community-operators-4z8jv\" (UID: \"0c2475ba-463b-498a-af81-983d0a8b3174\") " pod="openshift-marketplace/community-operators-4z8jv" Jan 30 22:10:10 crc kubenswrapper[4869]: I0130 22:10:10.812938 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj4g7\" (UniqueName: \"kubernetes.io/projected/0c2475ba-463b-498a-af81-983d0a8b3174-kube-api-access-cj4g7\") pod \"community-operators-4z8jv\" (UID: \"0c2475ba-463b-498a-af81-983d0a8b3174\") " pod="openshift-marketplace/community-operators-4z8jv" Jan 30 22:10:10 crc kubenswrapper[4869]: I0130 22:10:10.913458 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cj4g7\" (UniqueName: \"kubernetes.io/projected/0c2475ba-463b-498a-af81-983d0a8b3174-kube-api-access-cj4g7\") pod \"community-operators-4z8jv\" (UID: \"0c2475ba-463b-498a-af81-983d0a8b3174\") " pod="openshift-marketplace/community-operators-4z8jv" Jan 30 22:10:10 crc kubenswrapper[4869]: I0130 22:10:10.913554 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c2475ba-463b-498a-af81-983d0a8b3174-utilities\") pod \"community-operators-4z8jv\" (UID: \"0c2475ba-463b-498a-af81-983d0a8b3174\") " pod="openshift-marketplace/community-operators-4z8jv" Jan 30 22:10:10 crc kubenswrapper[4869]: I0130 22:10:10.913575 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c2475ba-463b-498a-af81-983d0a8b3174-catalog-content\") pod \"community-operators-4z8jv\" (UID: \"0c2475ba-463b-498a-af81-983d0a8b3174\") " pod="openshift-marketplace/community-operators-4z8jv" Jan 30 22:10:10 crc kubenswrapper[4869]: I0130 22:10:10.914235 4869 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c2475ba-463b-498a-af81-983d0a8b3174-utilities\") pod \"community-operators-4z8jv\" (UID: \"0c2475ba-463b-498a-af81-983d0a8b3174\") " pod="openshift-marketplace/community-operators-4z8jv" Jan 30 22:10:10 crc kubenswrapper[4869]: I0130 22:10:10.914302 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c2475ba-463b-498a-af81-983d0a8b3174-catalog-content\") pod \"community-operators-4z8jv\" (UID: \"0c2475ba-463b-498a-af81-983d0a8b3174\") " pod="openshift-marketplace/community-operators-4z8jv" Jan 30 22:10:10 crc kubenswrapper[4869]: I0130 22:10:10.935316 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cj4g7\" (UniqueName: \"kubernetes.io/projected/0c2475ba-463b-498a-af81-983d0a8b3174-kube-api-access-cj4g7\") pod \"community-operators-4z8jv\" (UID: \"0c2475ba-463b-498a-af81-983d0a8b3174\") " pod="openshift-marketplace/community-operators-4z8jv" Jan 30 22:10:11 crc kubenswrapper[4869]: I0130 22:10:11.023709 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4z8jv" Jan 30 22:10:11 crc kubenswrapper[4869]: I0130 22:10:11.498094 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4z8jv"] Jan 30 22:10:12 crc kubenswrapper[4869]: I0130 22:10:12.348920 4869 generic.go:334] "Generic (PLEG): container finished" podID="0c2475ba-463b-498a-af81-983d0a8b3174" containerID="b8f32578d9e86574b14e19328d552a22cc86d93c85fd2375718f37a919ce674d" exitCode=0 Jan 30 22:10:12 crc kubenswrapper[4869]: I0130 22:10:12.349009 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4z8jv" event={"ID":"0c2475ba-463b-498a-af81-983d0a8b3174","Type":"ContainerDied","Data":"b8f32578d9e86574b14e19328d552a22cc86d93c85fd2375718f37a919ce674d"} Jan 30 22:10:12 crc kubenswrapper[4869]: I0130 22:10:12.349309 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4z8jv" event={"ID":"0c2475ba-463b-498a-af81-983d0a8b3174","Type":"ContainerStarted","Data":"daaba28141efc32048948f7d3be3487f59cb878cf3785471f9130802ea1d5d33"} Jan 30 22:10:13 crc kubenswrapper[4869]: I0130 22:10:13.356663 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4z8jv" event={"ID":"0c2475ba-463b-498a-af81-983d0a8b3174","Type":"ContainerStarted","Data":"4a091c0400da083d58159825e8d66ac8ca0402ed847ec1ff4bf20564f3a7d9e3"} Jan 30 22:10:14 crc kubenswrapper[4869]: I0130 22:10:14.367910 4869 generic.go:334] "Generic (PLEG): container finished" podID="0c2475ba-463b-498a-af81-983d0a8b3174" containerID="4a091c0400da083d58159825e8d66ac8ca0402ed847ec1ff4bf20564f3a7d9e3" exitCode=0 Jan 30 22:10:14 crc kubenswrapper[4869]: I0130 22:10:14.368066 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4z8jv" event={"ID":"0c2475ba-463b-498a-af81-983d0a8b3174","Type":"ContainerDied","Data":"4a091c0400da083d58159825e8d66ac8ca0402ed847ec1ff4bf20564f3a7d9e3"} Jan 30 22:10:15 crc kubenswrapper[4869]: I0130 22:10:15.377284 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4z8jv" event={"ID":"0c2475ba-463b-498a-af81-983d0a8b3174","Type":"ContainerStarted","Data":"2aaf0adc7aad9bebb7d9070066e8350848af43d59beeb6584ff5180c692015dd"} Jan 30 
22:10:15 crc kubenswrapper[4869]: I0130 22:10:15.398615 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4z8jv" podStartSLOduration=2.743587575 podStartE2EDuration="5.398587072s" podCreationTimestamp="2026-01-30 22:10:10 +0000 UTC" firstStartedPulling="2026-01-30 22:10:12.350753521 +0000 UTC m=+1613.236511546" lastFinishedPulling="2026-01-30 22:10:15.005753018 +0000 UTC m=+1615.891511043" observedRunningTime="2026-01-30 22:10:15.392766051 +0000 UTC m=+1616.278524086" watchObservedRunningTime="2026-01-30 22:10:15.398587072 +0000 UTC m=+1616.284345097" Jan 30 22:10:21 crc kubenswrapper[4869]: I0130 22:10:21.024750 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4z8jv" Jan 30 22:10:21 crc kubenswrapper[4869]: I0130 22:10:21.025455 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4z8jv" Jan 30 22:10:21 crc kubenswrapper[4869]: I0130 22:10:21.068578 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4z8jv" Jan 30 22:10:21 crc kubenswrapper[4869]: I0130 22:10:21.448875 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4z8jv" Jan 30 22:10:21 crc kubenswrapper[4869]: I0130 22:10:21.491088 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4z8jv"] Jan 30 22:10:22 crc kubenswrapper[4869]: I0130 22:10:22.724478 4869 scope.go:117] "RemoveContainer" containerID="681940ad68535c552ce6434a887f71e1eb9fc67fd5d9ee409b77190217e3ba77" Jan 30 22:10:22 crc kubenswrapper[4869]: I0130 22:10:22.775165 4869 scope.go:117] "RemoveContainer" containerID="1e1984c3f29e264a9568411e7643d013e896efadf4da3caebab064a8302bf46b" Jan 30 22:10:22 crc kubenswrapper[4869]: I0130 22:10:22.792630 4869 scope.go:117] "RemoveContainer" containerID="7650c35a76197836c7579269a8aca153e89b63434ae7f559706e7f94bb5949e7" Jan 30 22:10:22 crc kubenswrapper[4869]: I0130 22:10:22.816270 4869 scope.go:117] "RemoveContainer" containerID="3e6caf84787a54d79e5bf12ae642db88009e24419803d6a191b5375af5acbe69" Jan 30 22:10:23 crc kubenswrapper[4869]: I0130 22:10:23.423445 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4z8jv" podUID="0c2475ba-463b-498a-af81-983d0a8b3174" containerName="registry-server" containerID="cri-o://2aaf0adc7aad9bebb7d9070066e8350848af43d59beeb6584ff5180c692015dd" gracePeriod=2 Jan 30 22:10:23 crc kubenswrapper[4869]: I0130 22:10:23.731005 4869 util.go:48] "No ready sandbox for pod can be found. 
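A note on the two durations in the pod_startup_latency_tracker entry above: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, while podStartSLOduration additionally excludes the time the kubelet spent pulling images (lastFinishedPulling minus firstStartedPulling). The numbers logged for community-operators-4z8jv bear this out; a minimal sketch checking the arithmetic, with the monotonic offsets copied verbatim from the entry and the variable names mine:

```go
package main

import "fmt"

// Worked example from the startup-latency entry above for
// community-operators-4z8jv. The SLO duration excludes image-pull time,
// so it should equal the end-to-end duration minus the pull window.
func main() {
	// m=+... monotonic offsets from the log entry, in seconds.
	firstStartedPulling := 1613.236511546
	lastFinishedPulling := 1615.891511043
	podStartE2E := 5.398587072 // podStartE2EDuration

	pull := lastFinishedPulling - firstStartedPulling // 2.654999497s pulling
	fmt.Printf("pull=%.9fs slo=%.9fs\n", pull, podStartE2E-pull)
	// Prints slo=2.743587575s, matching podStartSLOduration above.
}
```

The same relationship holds for the redhat-operators-77vfc entry later in this section (4.756024855s end-to-end minus a 2.693413333s pull window gives its 2.062611522s SLO duration).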
Need to start a new one" pod="openshift-marketplace/community-operators-4z8jv" Jan 30 22:10:23 crc kubenswrapper[4869]: I0130 22:10:23.867258 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c2475ba-463b-498a-af81-983d0a8b3174-utilities\") pod \"0c2475ba-463b-498a-af81-983d0a8b3174\" (UID: \"0c2475ba-463b-498a-af81-983d0a8b3174\") " Jan 30 22:10:23 crc kubenswrapper[4869]: I0130 22:10:23.867311 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cj4g7\" (UniqueName: \"kubernetes.io/projected/0c2475ba-463b-498a-af81-983d0a8b3174-kube-api-access-cj4g7\") pod \"0c2475ba-463b-498a-af81-983d0a8b3174\" (UID: \"0c2475ba-463b-498a-af81-983d0a8b3174\") " Jan 30 22:10:23 crc kubenswrapper[4869]: I0130 22:10:23.867332 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c2475ba-463b-498a-af81-983d0a8b3174-catalog-content\") pod \"0c2475ba-463b-498a-af81-983d0a8b3174\" (UID: \"0c2475ba-463b-498a-af81-983d0a8b3174\") " Jan 30 22:10:23 crc kubenswrapper[4869]: I0130 22:10:23.868370 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c2475ba-463b-498a-af81-983d0a8b3174-utilities" (OuterVolumeSpecName: "utilities") pod "0c2475ba-463b-498a-af81-983d0a8b3174" (UID: "0c2475ba-463b-498a-af81-983d0a8b3174"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:10:23 crc kubenswrapper[4869]: I0130 22:10:23.872420 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c2475ba-463b-498a-af81-983d0a8b3174-kube-api-access-cj4g7" (OuterVolumeSpecName: "kube-api-access-cj4g7") pod "0c2475ba-463b-498a-af81-983d0a8b3174" (UID: "0c2475ba-463b-498a-af81-983d0a8b3174"). InnerVolumeSpecName "kube-api-access-cj4g7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:10:23 crc kubenswrapper[4869]: I0130 22:10:23.929035 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c2475ba-463b-498a-af81-983d0a8b3174-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0c2475ba-463b-498a-af81-983d0a8b3174" (UID: "0c2475ba-463b-498a-af81-983d0a8b3174"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:10:23 crc kubenswrapper[4869]: I0130 22:10:23.968014 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c2475ba-463b-498a-af81-983d0a8b3174-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:10:23 crc kubenswrapper[4869]: I0130 22:10:23.968055 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cj4g7\" (UniqueName: \"kubernetes.io/projected/0c2475ba-463b-498a-af81-983d0a8b3174-kube-api-access-cj4g7\") on node \"crc\" DevicePath \"\"" Jan 30 22:10:23 crc kubenswrapper[4869]: I0130 22:10:23.968074 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c2475ba-463b-498a-af81-983d0a8b3174-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:10:24 crc kubenswrapper[4869]: I0130 22:10:24.438271 4869 generic.go:334] "Generic (PLEG): container finished" podID="0c2475ba-463b-498a-af81-983d0a8b3174" containerID="2aaf0adc7aad9bebb7d9070066e8350848af43d59beeb6584ff5180c692015dd" exitCode=0 Jan 30 22:10:24 crc kubenswrapper[4869]: I0130 22:10:24.438336 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4z8jv" event={"ID":"0c2475ba-463b-498a-af81-983d0a8b3174","Type":"ContainerDied","Data":"2aaf0adc7aad9bebb7d9070066e8350848af43d59beeb6584ff5180c692015dd"} Jan 30 22:10:24 crc kubenswrapper[4869]: I0130 22:10:24.438376 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4z8jv" event={"ID":"0c2475ba-463b-498a-af81-983d0a8b3174","Type":"ContainerDied","Data":"daaba28141efc32048948f7d3be3487f59cb878cf3785471f9130802ea1d5d33"} Jan 30 22:10:24 crc kubenswrapper[4869]: I0130 22:10:24.438402 4869 scope.go:117] "RemoveContainer" containerID="2aaf0adc7aad9bebb7d9070066e8350848af43d59beeb6584ff5180c692015dd" Jan 30 22:10:24 crc kubenswrapper[4869]: I0130 22:10:24.438420 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4z8jv" Jan 30 22:10:24 crc kubenswrapper[4869]: I0130 22:10:24.461623 4869 scope.go:117] "RemoveContainer" containerID="4a091c0400da083d58159825e8d66ac8ca0402ed847ec1ff4bf20564f3a7d9e3" Jan 30 22:10:24 crc kubenswrapper[4869]: I0130 22:10:24.468082 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4z8jv"] Jan 30 22:10:24 crc kubenswrapper[4869]: I0130 22:10:24.472086 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4z8jv"] Jan 30 22:10:24 crc kubenswrapper[4869]: I0130 22:10:24.503810 4869 scope.go:117] "RemoveContainer" containerID="b8f32578d9e86574b14e19328d552a22cc86d93c85fd2375718f37a919ce674d" Jan 30 22:10:24 crc kubenswrapper[4869]: I0130 22:10:24.521576 4869 scope.go:117] "RemoveContainer" containerID="2aaf0adc7aad9bebb7d9070066e8350848af43d59beeb6584ff5180c692015dd" Jan 30 22:10:24 crc kubenswrapper[4869]: E0130 22:10:24.522184 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2aaf0adc7aad9bebb7d9070066e8350848af43d59beeb6584ff5180c692015dd\": container with ID starting with 2aaf0adc7aad9bebb7d9070066e8350848af43d59beeb6584ff5180c692015dd not found: ID does not exist" containerID="2aaf0adc7aad9bebb7d9070066e8350848af43d59beeb6584ff5180c692015dd" Jan 30 22:10:24 crc kubenswrapper[4869]: I0130 22:10:24.522240 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2aaf0adc7aad9bebb7d9070066e8350848af43d59beeb6584ff5180c692015dd"} err="failed to get container status \"2aaf0adc7aad9bebb7d9070066e8350848af43d59beeb6584ff5180c692015dd\": rpc error: code = NotFound desc = could not find container \"2aaf0adc7aad9bebb7d9070066e8350848af43d59beeb6584ff5180c692015dd\": container with ID starting with 2aaf0adc7aad9bebb7d9070066e8350848af43d59beeb6584ff5180c692015dd not found: ID does not exist" Jan 30 22:10:24 crc kubenswrapper[4869]: I0130 22:10:24.522270 4869 scope.go:117] "RemoveContainer" containerID="4a091c0400da083d58159825e8d66ac8ca0402ed847ec1ff4bf20564f3a7d9e3" Jan 30 22:10:24 crc kubenswrapper[4869]: E0130 22:10:24.522723 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a091c0400da083d58159825e8d66ac8ca0402ed847ec1ff4bf20564f3a7d9e3\": container with ID starting with 4a091c0400da083d58159825e8d66ac8ca0402ed847ec1ff4bf20564f3a7d9e3 not found: ID does not exist" containerID="4a091c0400da083d58159825e8d66ac8ca0402ed847ec1ff4bf20564f3a7d9e3" Jan 30 22:10:24 crc kubenswrapper[4869]: I0130 22:10:24.522771 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a091c0400da083d58159825e8d66ac8ca0402ed847ec1ff4bf20564f3a7d9e3"} err="failed to get container status \"4a091c0400da083d58159825e8d66ac8ca0402ed847ec1ff4bf20564f3a7d9e3\": rpc error: code = NotFound desc = could not find container \"4a091c0400da083d58159825e8d66ac8ca0402ed847ec1ff4bf20564f3a7d9e3\": container with ID starting with 4a091c0400da083d58159825e8d66ac8ca0402ed847ec1ff4bf20564f3a7d9e3 not found: ID does not exist" Jan 30 22:10:24 crc kubenswrapper[4869]: I0130 22:10:24.522793 4869 scope.go:117] "RemoveContainer" containerID="b8f32578d9e86574b14e19328d552a22cc86d93c85fd2375718f37a919ce674d" Jan 30 22:10:24 crc kubenswrapper[4869]: E0130 22:10:24.523149 4869 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b8f32578d9e86574b14e19328d552a22cc86d93c85fd2375718f37a919ce674d\": container with ID starting with b8f32578d9e86574b14e19328d552a22cc86d93c85fd2375718f37a919ce674d not found: ID does not exist" containerID="b8f32578d9e86574b14e19328d552a22cc86d93c85fd2375718f37a919ce674d" Jan 30 22:10:24 crc kubenswrapper[4869]: I0130 22:10:24.523217 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8f32578d9e86574b14e19328d552a22cc86d93c85fd2375718f37a919ce674d"} err="failed to get container status \"b8f32578d9e86574b14e19328d552a22cc86d93c85fd2375718f37a919ce674d\": rpc error: code = NotFound desc = could not find container \"b8f32578d9e86574b14e19328d552a22cc86d93c85fd2375718f37a919ce674d\": container with ID starting with b8f32578d9e86574b14e19328d552a22cc86d93c85fd2375718f37a919ce674d not found: ID does not exist" Jan 30 22:10:25 crc kubenswrapper[4869]: I0130 22:10:25.887259 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c2475ba-463b-498a-af81-983d0a8b3174" path="/var/lib/kubelet/pods/0c2475ba-463b-498a-af81-983d0a8b3174/volumes" Jan 30 22:10:31 crc kubenswrapper[4869]: I0130 22:10:31.990984 4869 patch_prober.go:28] interesting pod/machine-config-daemon-vzgdv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:10:31 crc kubenswrapper[4869]: I0130 22:10:31.991041 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:10:51 crc kubenswrapper[4869]: I0130 22:10:51.319371 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bqp4r/must-gather-j8pj4"] Jan 30 22:10:51 crc kubenswrapper[4869]: E0130 22:10:51.320260 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c2475ba-463b-498a-af81-983d0a8b3174" containerName="registry-server" Jan 30 22:10:51 crc kubenswrapper[4869]: I0130 22:10:51.320278 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c2475ba-463b-498a-af81-983d0a8b3174" containerName="registry-server" Jan 30 22:10:51 crc kubenswrapper[4869]: E0130 22:10:51.320304 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c2475ba-463b-498a-af81-983d0a8b3174" containerName="extract-content" Jan 30 22:10:51 crc kubenswrapper[4869]: I0130 22:10:51.320311 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c2475ba-463b-498a-af81-983d0a8b3174" containerName="extract-content" Jan 30 22:10:51 crc kubenswrapper[4869]: E0130 22:10:51.320325 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c2475ba-463b-498a-af81-983d0a8b3174" containerName="extract-utilities" Jan 30 22:10:51 crc kubenswrapper[4869]: I0130 22:10:51.320335 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c2475ba-463b-498a-af81-983d0a8b3174" containerName="extract-utilities" Jan 30 22:10:51 crc kubenswrapper[4869]: I0130 22:10:51.320468 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c2475ba-463b-498a-af81-983d0a8b3174" containerName="registry-server" Jan 30 22:10:51 crc kubenswrapper[4869]: I0130 
22:10:51.321231 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bqp4r/must-gather-j8pj4" Jan 30 22:10:51 crc kubenswrapper[4869]: I0130 22:10:51.323558 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-bqp4r"/"default-dockercfg-px96m" Jan 30 22:10:51 crc kubenswrapper[4869]: I0130 22:10:51.324503 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-bqp4r"/"openshift-service-ca.crt" Jan 30 22:10:51 crc kubenswrapper[4869]: I0130 22:10:51.329231 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-bqp4r"/"kube-root-ca.crt" Jan 30 22:10:51 crc kubenswrapper[4869]: I0130 22:10:51.341909 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-bqp4r/must-gather-j8pj4"] Jan 30 22:10:51 crc kubenswrapper[4869]: I0130 22:10:51.413628 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfxpv\" (UniqueName: \"kubernetes.io/projected/f488504a-98f8-43bf-9662-421de98bf15f-kube-api-access-jfxpv\") pod \"must-gather-j8pj4\" (UID: \"f488504a-98f8-43bf-9662-421de98bf15f\") " pod="openshift-must-gather-bqp4r/must-gather-j8pj4" Jan 30 22:10:51 crc kubenswrapper[4869]: I0130 22:10:51.413698 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f488504a-98f8-43bf-9662-421de98bf15f-must-gather-output\") pod \"must-gather-j8pj4\" (UID: \"f488504a-98f8-43bf-9662-421de98bf15f\") " pod="openshift-must-gather-bqp4r/must-gather-j8pj4" Jan 30 22:10:51 crc kubenswrapper[4869]: I0130 22:10:51.514555 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfxpv\" (UniqueName: \"kubernetes.io/projected/f488504a-98f8-43bf-9662-421de98bf15f-kube-api-access-jfxpv\") pod \"must-gather-j8pj4\" (UID: \"f488504a-98f8-43bf-9662-421de98bf15f\") " pod="openshift-must-gather-bqp4r/must-gather-j8pj4" Jan 30 22:10:51 crc kubenswrapper[4869]: I0130 22:10:51.514621 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f488504a-98f8-43bf-9662-421de98bf15f-must-gather-output\") pod \"must-gather-j8pj4\" (UID: \"f488504a-98f8-43bf-9662-421de98bf15f\") " pod="openshift-must-gather-bqp4r/must-gather-j8pj4" Jan 30 22:10:51 crc kubenswrapper[4869]: I0130 22:10:51.515228 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f488504a-98f8-43bf-9662-421de98bf15f-must-gather-output\") pod \"must-gather-j8pj4\" (UID: \"f488504a-98f8-43bf-9662-421de98bf15f\") " pod="openshift-must-gather-bqp4r/must-gather-j8pj4" Jan 30 22:10:51 crc kubenswrapper[4869]: I0130 22:10:51.534555 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfxpv\" (UniqueName: \"kubernetes.io/projected/f488504a-98f8-43bf-9662-421de98bf15f-kube-api-access-jfxpv\") pod \"must-gather-j8pj4\" (UID: \"f488504a-98f8-43bf-9662-421de98bf15f\") " pod="openshift-must-gather-bqp4r/must-gather-j8pj4" Jan 30 22:10:51 crc kubenswrapper[4869]: I0130 22:10:51.637529 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bqp4r/must-gather-j8pj4" Jan 30 22:10:51 crc kubenswrapper[4869]: I0130 22:10:51.818121 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-bqp4r/must-gather-j8pj4"] Jan 30 22:10:52 crc kubenswrapper[4869]: I0130 22:10:52.603511 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bqp4r/must-gather-j8pj4" event={"ID":"f488504a-98f8-43bf-9662-421de98bf15f","Type":"ContainerStarted","Data":"d5928bb2ee22dd017a4d324d364509cf2fc828df304251a4531b8218ddef7924"} Jan 30 22:10:52 crc kubenswrapper[4869]: I0130 22:10:52.603860 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bqp4r/must-gather-j8pj4" event={"ID":"f488504a-98f8-43bf-9662-421de98bf15f","Type":"ContainerStarted","Data":"41a2928eac81d08135621b39c847672c6736d873330dcc7ea518f0cd6fd2fddf"} Jan 30 22:10:52 crc kubenswrapper[4869]: I0130 22:10:52.603877 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bqp4r/must-gather-j8pj4" event={"ID":"f488504a-98f8-43bf-9662-421de98bf15f","Type":"ContainerStarted","Data":"9c5464c41b9dd566c9cd3efa7a469f552e5113ea4bdf7686b3cc3a0b16e506fa"} Jan 30 22:10:52 crc kubenswrapper[4869]: I0130 22:10:52.617186 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-bqp4r/must-gather-j8pj4" podStartSLOduration=1.617167198 podStartE2EDuration="1.617167198s" podCreationTimestamp="2026-01-30 22:10:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:10:52.616326221 +0000 UTC m=+1653.502084246" watchObservedRunningTime="2026-01-30 22:10:52.617167198 +0000 UTC m=+1653.502925243" Jan 30 22:11:01 crc kubenswrapper[4869]: I0130 22:11:01.990429 4869 patch_prober.go:28] interesting pod/machine-config-daemon-vzgdv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:11:01 crc kubenswrapper[4869]: I0130 22:11:01.991065 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:11:10 crc kubenswrapper[4869]: I0130 22:11:10.223792 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-77vfc"] Jan 30 22:11:10 crc kubenswrapper[4869]: I0130 22:11:10.225511 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-77vfc" Jan 30 22:11:10 crc kubenswrapper[4869]: I0130 22:11:10.235359 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-77vfc"] Jan 30 22:11:10 crc kubenswrapper[4869]: I0130 22:11:10.354723 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx82h\" (UniqueName: \"kubernetes.io/projected/9c641d0b-f1b6-4f02-8b72-820a4c0bd698-kube-api-access-qx82h\") pod \"redhat-operators-77vfc\" (UID: \"9c641d0b-f1b6-4f02-8b72-820a4c0bd698\") " pod="openshift-marketplace/redhat-operators-77vfc" Jan 30 22:11:10 crc kubenswrapper[4869]: I0130 22:11:10.354789 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c641d0b-f1b6-4f02-8b72-820a4c0bd698-utilities\") pod \"redhat-operators-77vfc\" (UID: \"9c641d0b-f1b6-4f02-8b72-820a4c0bd698\") " pod="openshift-marketplace/redhat-operators-77vfc" Jan 30 22:11:10 crc kubenswrapper[4869]: I0130 22:11:10.354827 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c641d0b-f1b6-4f02-8b72-820a4c0bd698-catalog-content\") pod \"redhat-operators-77vfc\" (UID: \"9c641d0b-f1b6-4f02-8b72-820a4c0bd698\") " pod="openshift-marketplace/redhat-operators-77vfc" Jan 30 22:11:10 crc kubenswrapper[4869]: I0130 22:11:10.456164 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx82h\" (UniqueName: \"kubernetes.io/projected/9c641d0b-f1b6-4f02-8b72-820a4c0bd698-kube-api-access-qx82h\") pod \"redhat-operators-77vfc\" (UID: \"9c641d0b-f1b6-4f02-8b72-820a4c0bd698\") " pod="openshift-marketplace/redhat-operators-77vfc" Jan 30 22:11:10 crc kubenswrapper[4869]: I0130 22:11:10.456226 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c641d0b-f1b6-4f02-8b72-820a4c0bd698-utilities\") pod \"redhat-operators-77vfc\" (UID: \"9c641d0b-f1b6-4f02-8b72-820a4c0bd698\") " pod="openshift-marketplace/redhat-operators-77vfc" Jan 30 22:11:10 crc kubenswrapper[4869]: I0130 22:11:10.456255 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c641d0b-f1b6-4f02-8b72-820a4c0bd698-catalog-content\") pod \"redhat-operators-77vfc\" (UID: \"9c641d0b-f1b6-4f02-8b72-820a4c0bd698\") " pod="openshift-marketplace/redhat-operators-77vfc" Jan 30 22:11:10 crc kubenswrapper[4869]: I0130 22:11:10.456757 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c641d0b-f1b6-4f02-8b72-820a4c0bd698-utilities\") pod \"redhat-operators-77vfc\" (UID: \"9c641d0b-f1b6-4f02-8b72-820a4c0bd698\") " pod="openshift-marketplace/redhat-operators-77vfc" Jan 30 22:11:10 crc kubenswrapper[4869]: I0130 22:11:10.456776 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c641d0b-f1b6-4f02-8b72-820a4c0bd698-catalog-content\") pod \"redhat-operators-77vfc\" (UID: \"9c641d0b-f1b6-4f02-8b72-820a4c0bd698\") " pod="openshift-marketplace/redhat-operators-77vfc" Jan 30 22:11:10 crc kubenswrapper[4869]: I0130 22:11:10.476330 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-qx82h\" (UniqueName: \"kubernetes.io/projected/9c641d0b-f1b6-4f02-8b72-820a4c0bd698-kube-api-access-qx82h\") pod \"redhat-operators-77vfc\" (UID: \"9c641d0b-f1b6-4f02-8b72-820a4c0bd698\") " pod="openshift-marketplace/redhat-operators-77vfc" Jan 30 22:11:10 crc kubenswrapper[4869]: I0130 22:11:10.557259 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-77vfc" Jan 30 22:11:10 crc kubenswrapper[4869]: I0130 22:11:10.805051 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-77vfc"] Jan 30 22:11:11 crc kubenswrapper[4869]: E0130 22:11:11.064668 4869 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c641d0b_f1b6_4f02_8b72_820a4c0bd698.slice/crio-conmon-89925c36093c9d23572bfb11d7a036d286bc6f1b4029728640c925cd129ae265.scope\": RecentStats: unable to find data in memory cache]" Jan 30 22:11:11 crc kubenswrapper[4869]: I0130 22:11:11.707434 4869 generic.go:334] "Generic (PLEG): container finished" podID="9c641d0b-f1b6-4f02-8b72-820a4c0bd698" containerID="89925c36093c9d23572bfb11d7a036d286bc6f1b4029728640c925cd129ae265" exitCode=0 Jan 30 22:11:11 crc kubenswrapper[4869]: I0130 22:11:11.707552 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-77vfc" event={"ID":"9c641d0b-f1b6-4f02-8b72-820a4c0bd698","Type":"ContainerDied","Data":"89925c36093c9d23572bfb11d7a036d286bc6f1b4029728640c925cd129ae265"} Jan 30 22:11:11 crc kubenswrapper[4869]: I0130 22:11:11.707781 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-77vfc" event={"ID":"9c641d0b-f1b6-4f02-8b72-820a4c0bd698","Type":"ContainerStarted","Data":"94c8f6b78a0f0a5dea67d74e3b57869cef85152f109d28701ae7291ff12befcd"} Jan 30 22:11:13 crc kubenswrapper[4869]: I0130 22:11:13.721513 4869 generic.go:334] "Generic (PLEG): container finished" podID="9c641d0b-f1b6-4f02-8b72-820a4c0bd698" containerID="60b4a90a23402c80507d441cfae4699ad8986d926e0563231f78c68d33ec686b" exitCode=0 Jan 30 22:11:13 crc kubenswrapper[4869]: I0130 22:11:13.721611 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-77vfc" event={"ID":"9c641d0b-f1b6-4f02-8b72-820a4c0bd698","Type":"ContainerDied","Data":"60b4a90a23402c80507d441cfae4699ad8986d926e0563231f78c68d33ec686b"} Jan 30 22:11:14 crc kubenswrapper[4869]: I0130 22:11:14.731495 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-77vfc" event={"ID":"9c641d0b-f1b6-4f02-8b72-820a4c0bd698","Type":"ContainerStarted","Data":"adb24b4c65ceaafe29b261e8ad6de602c1bfb5f21006dc55f989ddeea57b7d9a"} Jan 30 22:11:14 crc kubenswrapper[4869]: I0130 22:11:14.756042 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-77vfc" podStartSLOduration=2.062611522 podStartE2EDuration="4.756024855s" podCreationTimestamp="2026-01-30 22:11:10 +0000 UTC" firstStartedPulling="2026-01-30 22:11:11.709677629 +0000 UTC m=+1672.595435654" lastFinishedPulling="2026-01-30 22:11:14.403090962 +0000 UTC m=+1675.288848987" observedRunningTime="2026-01-30 22:11:14.754372823 +0000 UTC m=+1675.640130858" watchObservedRunningTime="2026-01-30 22:11:14.756024855 +0000 UTC m=+1675.641782880" Jan 30 22:11:20 crc kubenswrapper[4869]: I0130 22:11:20.557351 4869 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-77vfc" Jan 30 22:11:20 crc kubenswrapper[4869]: I0130 22:11:20.557680 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-77vfc" Jan 30 22:11:20 crc kubenswrapper[4869]: I0130 22:11:20.608356 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-77vfc" Jan 30 22:11:20 crc kubenswrapper[4869]: I0130 22:11:20.802252 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-77vfc" Jan 30 22:11:20 crc kubenswrapper[4869]: I0130 22:11:20.854836 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-77vfc"] Jan 30 22:11:22 crc kubenswrapper[4869]: I0130 22:11:22.778010 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-77vfc" podUID="9c641d0b-f1b6-4f02-8b72-820a4c0bd698" containerName="registry-server" containerID="cri-o://adb24b4c65ceaafe29b261e8ad6de602c1bfb5f21006dc55f989ddeea57b7d9a" gracePeriod=2 Jan 30 22:11:22 crc kubenswrapper[4869]: I0130 22:11:22.900356 4869 scope.go:117] "RemoveContainer" containerID="e54fe280205f95b79153a7f66e919e059f74ff96307ba80b4f0a9078c46ac33c" Jan 30 22:11:24 crc kubenswrapper[4869]: I0130 22:11:24.730689 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-77vfc" Jan 30 22:11:24 crc kubenswrapper[4869]: I0130 22:11:24.799681 4869 generic.go:334] "Generic (PLEG): container finished" podID="9c641d0b-f1b6-4f02-8b72-820a4c0bd698" containerID="adb24b4c65ceaafe29b261e8ad6de602c1bfb5f21006dc55f989ddeea57b7d9a" exitCode=0 Jan 30 22:11:24 crc kubenswrapper[4869]: I0130 22:11:24.799742 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-77vfc" event={"ID":"9c641d0b-f1b6-4f02-8b72-820a4c0bd698","Type":"ContainerDied","Data":"adb24b4c65ceaafe29b261e8ad6de602c1bfb5f21006dc55f989ddeea57b7d9a"} Jan 30 22:11:24 crc kubenswrapper[4869]: I0130 22:11:24.799787 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-77vfc" event={"ID":"9c641d0b-f1b6-4f02-8b72-820a4c0bd698","Type":"ContainerDied","Data":"94c8f6b78a0f0a5dea67d74e3b57869cef85152f109d28701ae7291ff12befcd"} Jan 30 22:11:24 crc kubenswrapper[4869]: I0130 22:11:24.799812 4869 scope.go:117] "RemoveContainer" containerID="adb24b4c65ceaafe29b261e8ad6de602c1bfb5f21006dc55f989ddeea57b7d9a" Jan 30 22:11:24 crc kubenswrapper[4869]: I0130 22:11:24.799805 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-77vfc" Jan 30 22:11:24 crc kubenswrapper[4869]: I0130 22:11:24.817644 4869 scope.go:117] "RemoveContainer" containerID="60b4a90a23402c80507d441cfae4699ad8986d926e0563231f78c68d33ec686b" Jan 30 22:11:24 crc kubenswrapper[4869]: I0130 22:11:24.834340 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qx82h\" (UniqueName: \"kubernetes.io/projected/9c641d0b-f1b6-4f02-8b72-820a4c0bd698-kube-api-access-qx82h\") pod \"9c641d0b-f1b6-4f02-8b72-820a4c0bd698\" (UID: \"9c641d0b-f1b6-4f02-8b72-820a4c0bd698\") " Jan 30 22:11:24 crc kubenswrapper[4869]: I0130 22:11:24.834964 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c641d0b-f1b6-4f02-8b72-820a4c0bd698-utilities\") pod \"9c641d0b-f1b6-4f02-8b72-820a4c0bd698\" (UID: \"9c641d0b-f1b6-4f02-8b72-820a4c0bd698\") " Jan 30 22:11:24 crc kubenswrapper[4869]: I0130 22:11:24.835257 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c641d0b-f1b6-4f02-8b72-820a4c0bd698-catalog-content\") pod \"9c641d0b-f1b6-4f02-8b72-820a4c0bd698\" (UID: \"9c641d0b-f1b6-4f02-8b72-820a4c0bd698\") " Jan 30 22:11:24 crc kubenswrapper[4869]: I0130 22:11:24.835297 4869 scope.go:117] "RemoveContainer" containerID="89925c36093c9d23572bfb11d7a036d286bc6f1b4029728640c925cd129ae265" Jan 30 22:11:24 crc kubenswrapper[4869]: I0130 22:11:24.835982 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c641d0b-f1b6-4f02-8b72-820a4c0bd698-utilities" (OuterVolumeSpecName: "utilities") pod "9c641d0b-f1b6-4f02-8b72-820a4c0bd698" (UID: "9c641d0b-f1b6-4f02-8b72-820a4c0bd698"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:11:24 crc kubenswrapper[4869]: I0130 22:11:24.841053 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c641d0b-f1b6-4f02-8b72-820a4c0bd698-kube-api-access-qx82h" (OuterVolumeSpecName: "kube-api-access-qx82h") pod "9c641d0b-f1b6-4f02-8b72-820a4c0bd698" (UID: "9c641d0b-f1b6-4f02-8b72-820a4c0bd698"). InnerVolumeSpecName "kube-api-access-qx82h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:11:24 crc kubenswrapper[4869]: I0130 22:11:24.874473 4869 scope.go:117] "RemoveContainer" containerID="adb24b4c65ceaafe29b261e8ad6de602c1bfb5f21006dc55f989ddeea57b7d9a" Jan 30 22:11:24 crc kubenswrapper[4869]: E0130 22:11:24.875277 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adb24b4c65ceaafe29b261e8ad6de602c1bfb5f21006dc55f989ddeea57b7d9a\": container with ID starting with adb24b4c65ceaafe29b261e8ad6de602c1bfb5f21006dc55f989ddeea57b7d9a not found: ID does not exist" containerID="adb24b4c65ceaafe29b261e8ad6de602c1bfb5f21006dc55f989ddeea57b7d9a" Jan 30 22:11:24 crc kubenswrapper[4869]: I0130 22:11:24.875361 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adb24b4c65ceaafe29b261e8ad6de602c1bfb5f21006dc55f989ddeea57b7d9a"} err="failed to get container status \"adb24b4c65ceaafe29b261e8ad6de602c1bfb5f21006dc55f989ddeea57b7d9a\": rpc error: code = NotFound desc = could not find container \"adb24b4c65ceaafe29b261e8ad6de602c1bfb5f21006dc55f989ddeea57b7d9a\": container with ID starting with adb24b4c65ceaafe29b261e8ad6de602c1bfb5f21006dc55f989ddeea57b7d9a not found: ID does not exist" Jan 30 22:11:24 crc kubenswrapper[4869]: I0130 22:11:24.875409 4869 scope.go:117] "RemoveContainer" containerID="60b4a90a23402c80507d441cfae4699ad8986d926e0563231f78c68d33ec686b" Jan 30 22:11:24 crc kubenswrapper[4869]: E0130 22:11:24.876007 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60b4a90a23402c80507d441cfae4699ad8986d926e0563231f78c68d33ec686b\": container with ID starting with 60b4a90a23402c80507d441cfae4699ad8986d926e0563231f78c68d33ec686b not found: ID does not exist" containerID="60b4a90a23402c80507d441cfae4699ad8986d926e0563231f78c68d33ec686b" Jan 30 22:11:24 crc kubenswrapper[4869]: I0130 22:11:24.876056 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60b4a90a23402c80507d441cfae4699ad8986d926e0563231f78c68d33ec686b"} err="failed to get container status \"60b4a90a23402c80507d441cfae4699ad8986d926e0563231f78c68d33ec686b\": rpc error: code = NotFound desc = could not find container \"60b4a90a23402c80507d441cfae4699ad8986d926e0563231f78c68d33ec686b\": container with ID starting with 60b4a90a23402c80507d441cfae4699ad8986d926e0563231f78c68d33ec686b not found: ID does not exist" Jan 30 22:11:24 crc kubenswrapper[4869]: I0130 22:11:24.876081 4869 scope.go:117] "RemoveContainer" containerID="89925c36093c9d23572bfb11d7a036d286bc6f1b4029728640c925cd129ae265" Jan 30 22:11:24 crc kubenswrapper[4869]: E0130 22:11:24.876480 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89925c36093c9d23572bfb11d7a036d286bc6f1b4029728640c925cd129ae265\": container with ID starting with 89925c36093c9d23572bfb11d7a036d286bc6f1b4029728640c925cd129ae265 not found: ID does not exist" containerID="89925c36093c9d23572bfb11d7a036d286bc6f1b4029728640c925cd129ae265" Jan 30 22:11:24 crc kubenswrapper[4869]: I0130 22:11:24.876526 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89925c36093c9d23572bfb11d7a036d286bc6f1b4029728640c925cd129ae265"} err="failed to get container status \"89925c36093c9d23572bfb11d7a036d286bc6f1b4029728640c925cd129ae265\": rpc error: code = NotFound desc = could not 
find container \"89925c36093c9d23572bfb11d7a036d286bc6f1b4029728640c925cd129ae265\": container with ID starting with 89925c36093c9d23572bfb11d7a036d286bc6f1b4029728640c925cd129ae265 not found: ID does not exist" Jan 30 22:11:24 crc kubenswrapper[4869]: I0130 22:11:24.937476 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c641d0b-f1b6-4f02-8b72-820a4c0bd698-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:11:24 crc kubenswrapper[4869]: I0130 22:11:24.937521 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qx82h\" (UniqueName: \"kubernetes.io/projected/9c641d0b-f1b6-4f02-8b72-820a4c0bd698-kube-api-access-qx82h\") on node \"crc\" DevicePath \"\"" Jan 30 22:11:24 crc kubenswrapper[4869]: I0130 22:11:24.951007 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c641d0b-f1b6-4f02-8b72-820a4c0bd698-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9c641d0b-f1b6-4f02-8b72-820a4c0bd698" (UID: "9c641d0b-f1b6-4f02-8b72-820a4c0bd698"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:11:25 crc kubenswrapper[4869]: I0130 22:11:25.039531 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c641d0b-f1b6-4f02-8b72-820a4c0bd698-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:11:25 crc kubenswrapper[4869]: I0130 22:11:25.128991 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-77vfc"] Jan 30 22:11:25 crc kubenswrapper[4869]: I0130 22:11:25.135834 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-77vfc"] Jan 30 22:11:25 crc kubenswrapper[4869]: I0130 22:11:25.885039 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c641d0b-f1b6-4f02-8b72-820a4c0bd698" path="/var/lib/kubelet/pods/9c641d0b-f1b6-4f02-8b72-820a4c0bd698/volumes" Jan 30 22:11:31 crc kubenswrapper[4869]: I0130 22:11:31.990773 4869 patch_prober.go:28] interesting pod/machine-config-daemon-vzgdv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:11:31 crc kubenswrapper[4869]: I0130 22:11:31.992140 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:11:31 crc kubenswrapper[4869]: I0130 22:11:31.992261 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" Jan 30 22:11:31 crc kubenswrapper[4869]: I0130 22:11:31.992966 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"37739f8530d75ca7514f9e5e898fcdf553eb88295e259a8f463f2bada9b2701f"} pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 22:11:31 crc kubenswrapper[4869]: I0130 22:11:31.993113 4869 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500" containerName="machine-config-daemon" containerID="cri-o://37739f8530d75ca7514f9e5e898fcdf553eb88295e259a8f463f2bada9b2701f" gracePeriod=600
Jan 30 22:11:32 crc kubenswrapper[4869]: E0130 22:11:32.125651 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vzgdv_openshift-machine-config-operator(b6fc0664-5e80-440d-a6e8-4189cdf5c500)\"" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500"
Jan 30 22:11:32 crc kubenswrapper[4869]: I0130 22:11:32.847359 4869 generic.go:334] "Generic (PLEG): container finished" podID="b6fc0664-5e80-440d-a6e8-4189cdf5c500" containerID="37739f8530d75ca7514f9e5e898fcdf553eb88295e259a8f463f2bada9b2701f" exitCode=0
Jan 30 22:11:32 crc kubenswrapper[4869]: I0130 22:11:32.847387 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" event={"ID":"b6fc0664-5e80-440d-a6e8-4189cdf5c500","Type":"ContainerDied","Data":"37739f8530d75ca7514f9e5e898fcdf553eb88295e259a8f463f2bada9b2701f"}
Jan 30 22:11:32 crc kubenswrapper[4869]: I0130 22:11:32.847448 4869 scope.go:117] "RemoveContainer" containerID="7ffcbacd9fdeb4349443b9f613e5c78dd198a3778ae8b5b04d896ffb86351bb7"
Jan 30 22:11:32 crc kubenswrapper[4869]: I0130 22:11:32.848205 4869 scope.go:117] "RemoveContainer" containerID="37739f8530d75ca7514f9e5e898fcdf553eb88295e259a8f463f2bada9b2701f"
Jan 30 22:11:32 crc kubenswrapper[4869]: E0130 22:11:32.848726 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vzgdv_openshift-machine-config-operator(b6fc0664-5e80-440d-a6e8-4189cdf5c500)\"" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500"
Jan 30 22:11:40 crc kubenswrapper[4869]: I0130 22:11:40.262422 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-6bxlw_492fddc8-5b29-4b32-9b4c-9831317fae23/control-plane-machine-set-operator/0.log"
Jan 30 22:11:40 crc kubenswrapper[4869]: I0130 22:11:40.419607 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-npnfg_c1f0b262-4d72-49a2-aa45-918fbc89a9f2/kube-rbac-proxy/0.log"
Jan 30 22:11:40 crc kubenswrapper[4869]: I0130 22:11:40.448171 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-npnfg_c1f0b262-4d72-49a2-aa45-918fbc89a9f2/machine-api-operator/0.log"
Jan 30 22:11:44 crc kubenswrapper[4869]: I0130 22:11:44.876760 4869 scope.go:117] "RemoveContainer" containerID="37739f8530d75ca7514f9e5e898fcdf553eb88295e259a8f463f2bada9b2701f"
Jan 30 22:11:44 crc kubenswrapper[4869]: E0130 22:11:44.877458 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vzgdv_openshift-machine-config-operator(b6fc0664-5e80-440d-a6e8-4189cdf5c500)\"" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500"
Jan 30 22:11:58 crc kubenswrapper[4869]: I0130 22:11:58.877797 4869 scope.go:117] "RemoveContainer" containerID="37739f8530d75ca7514f9e5e898fcdf553eb88295e259a8f463f2bada9b2701f"
Jan 30 22:11:58 crc kubenswrapper[4869]: E0130 22:11:58.878439 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vzgdv_openshift-machine-config-operator(b6fc0664-5e80-440d-a6e8-4189cdf5c500)\"" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500"
Jan 30 22:12:08 crc kubenswrapper[4869]: I0130 22:12:08.699657 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-kjnqq_48ddd270-2e1a-4924-905c-89327f9fd1f4/kube-rbac-proxy/0.log"
Jan 30 22:12:08 crc kubenswrapper[4869]: I0130 22:12:08.740303 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-kjnqq_48ddd270-2e1a-4924-905c-89327f9fd1f4/controller/0.log"
Jan 30 22:12:08 crc kubenswrapper[4869]: I0130 22:12:08.912066 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrwjb_0d120a71-eb76-4e21-bd06-a646961dbebc/cp-frr-files/0.log"
Jan 30 22:12:09 crc kubenswrapper[4869]: I0130 22:12:09.070765 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrwjb_0d120a71-eb76-4e21-bd06-a646961dbebc/cp-reloader/0.log"
Jan 30 22:12:09 crc kubenswrapper[4869]: I0130 22:12:09.099161 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrwjb_0d120a71-eb76-4e21-bd06-a646961dbebc/cp-reloader/0.log"
Jan 30 22:12:09 crc kubenswrapper[4869]: I0130 22:12:09.122240 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrwjb_0d120a71-eb76-4e21-bd06-a646961dbebc/cp-metrics/0.log"
Jan 30 22:12:09 crc kubenswrapper[4869]: I0130 22:12:09.127709 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrwjb_0d120a71-eb76-4e21-bd06-a646961dbebc/cp-frr-files/0.log"
Jan 30 22:12:09 crc kubenswrapper[4869]: I0130 22:12:09.307738 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrwjb_0d120a71-eb76-4e21-bd06-a646961dbebc/cp-frr-files/0.log"
Jan 30 22:12:09 crc kubenswrapper[4869]: I0130 22:12:09.326600 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrwjb_0d120a71-eb76-4e21-bd06-a646961dbebc/cp-reloader/0.log"
Jan 30 22:12:09 crc kubenswrapper[4869]: I0130 22:12:09.336710 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrwjb_0d120a71-eb76-4e21-bd06-a646961dbebc/cp-metrics/0.log"
Jan 30 22:12:09 crc kubenswrapper[4869]: I0130 22:12:09.369597 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrwjb_0d120a71-eb76-4e21-bd06-a646961dbebc/cp-metrics/0.log"
Jan 30 22:12:09 crc kubenswrapper[4869]: I0130 22:12:09.508936 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrwjb_0d120a71-eb76-4e21-bd06-a646961dbebc/cp-reloader/0.log"
Jan 30 22:12:09 crc kubenswrapper[4869]: I0130 22:12:09.541660 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrwjb_0d120a71-eb76-4e21-bd06-a646961dbebc/controller/0.log"
Jan 30 22:12:09 crc kubenswrapper[4869]: I0130 22:12:09.546418 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrwjb_0d120a71-eb76-4e21-bd06-a646961dbebc/cp-metrics/0.log"
Jan 30 22:12:09 crc kubenswrapper[4869]: I0130 22:12:09.564647 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrwjb_0d120a71-eb76-4e21-bd06-a646961dbebc/cp-frr-files/0.log"
Jan 30 22:12:09 crc kubenswrapper[4869]: I0130 22:12:09.755588 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrwjb_0d120a71-eb76-4e21-bd06-a646961dbebc/kube-rbac-proxy/0.log"
Jan 30 22:12:09 crc kubenswrapper[4869]: I0130 22:12:09.758270 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrwjb_0d120a71-eb76-4e21-bd06-a646961dbebc/frr-metrics/0.log"
Jan 30 22:12:09 crc kubenswrapper[4869]: I0130 22:12:09.812365 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrwjb_0d120a71-eb76-4e21-bd06-a646961dbebc/kube-rbac-proxy-frr/0.log"
Jan 30 22:12:10 crc kubenswrapper[4869]: I0130 22:12:10.034734 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrwjb_0d120a71-eb76-4e21-bd06-a646961dbebc/reloader/0.log"
Jan 30 22:12:10 crc kubenswrapper[4869]: I0130 22:12:10.041149 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-pdqbz_8df09342-d1b4-46c8-b073-756f9c26e15b/frr-k8s-webhook-server/0.log"
Jan 30 22:12:10 crc kubenswrapper[4869]: I0130 22:12:10.149122 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrwjb_0d120a71-eb76-4e21-bd06-a646961dbebc/frr/0.log"
Jan 30 22:12:10 crc kubenswrapper[4869]: I0130 22:12:10.233376 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-c784b4f9f-nctpd_74d47ca3-77d5-40b2-bf78-e6434e094b98/manager/0.log"
Jan 30 22:12:10 crc kubenswrapper[4869]: I0130 22:12:10.360431 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7866d54458-pq5sd_b8abd45b-148e-4550-9f42-7ebb36bc52a3/webhook-server/0.log"
Jan 30 22:12:10 crc kubenswrapper[4869]: I0130 22:12:10.403054 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2fslp_bb7f0287-c0d2-4a75-b392-5d143f6a9eb6/kube-rbac-proxy/0.log"
Jan 30 22:12:10 crc kubenswrapper[4869]: I0130 22:12:10.579797 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2fslp_bb7f0287-c0d2-4a75-b392-5d143f6a9eb6/speaker/0.log"
Jan 30 22:12:13 crc kubenswrapper[4869]: I0130 22:12:13.876549 4869 scope.go:117] "RemoveContainer" containerID="37739f8530d75ca7514f9e5e898fcdf553eb88295e259a8f463f2bada9b2701f"
Jan 30 22:12:13 crc kubenswrapper[4869]: E0130 22:12:13.877353 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vzgdv_openshift-machine-config-operator(b6fc0664-5e80-440d-a6e8-4189cdf5c500)\"" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500"
Jan 30 22:12:28 crc kubenswrapper[4869]: I0130 22:12:28.877204 4869 scope.go:117] "RemoveContainer" containerID="37739f8530d75ca7514f9e5e898fcdf553eb88295e259a8f463f2bada9b2701f"
Jan 30 22:12:28 crc kubenswrapper[4869]: E0130 22:12:28.878518 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vzgdv_openshift-machine-config-operator(b6fc0664-5e80-440d-a6e8-4189cdf5c500)\"" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500"
Jan 30 22:12:33 crc kubenswrapper[4869]: I0130 22:12:33.868179 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5q244_565c0cfa-b127-4da5-a3be-d660b5224997/util/0.log"
Jan 30 22:12:34 crc kubenswrapper[4869]: I0130 22:12:34.052260 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5q244_565c0cfa-b127-4da5-a3be-d660b5224997/pull/0.log"
Jan 30 22:12:34 crc kubenswrapper[4869]: I0130 22:12:34.092873 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5q244_565c0cfa-b127-4da5-a3be-d660b5224997/util/0.log"
Jan 30 22:12:34 crc kubenswrapper[4869]: I0130 22:12:34.103577 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5q244_565c0cfa-b127-4da5-a3be-d660b5224997/pull/0.log"
Jan 30 22:12:34 crc kubenswrapper[4869]: I0130 22:12:34.267063 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5q244_565c0cfa-b127-4da5-a3be-d660b5224997/util/0.log"
Jan 30 22:12:34 crc kubenswrapper[4869]: I0130 22:12:34.281790 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5q244_565c0cfa-b127-4da5-a3be-d660b5224997/extract/0.log"
Jan 30 22:12:34 crc kubenswrapper[4869]: I0130 22:12:34.305713 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5q244_565c0cfa-b127-4da5-a3be-d660b5224997/pull/0.log"
Jan 30 22:12:34 crc kubenswrapper[4869]: I0130 22:12:34.438745 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-djhl2_9f702f9e-18d7-4559-8745-8b691886d766/extract-utilities/0.log"
Jan 30 22:12:34 crc kubenswrapper[4869]: I0130 22:12:34.602116 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-djhl2_9f702f9e-18d7-4559-8745-8b691886d766/extract-content/0.log"
Jan 30 22:12:34 crc kubenswrapper[4869]: I0130 22:12:34.602144 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-djhl2_9f702f9e-18d7-4559-8745-8b691886d766/extract-utilities/0.log"
Jan 30 22:12:34 crc kubenswrapper[4869]: I0130 22:12:34.627024 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-djhl2_9f702f9e-18d7-4559-8745-8b691886d766/extract-content/0.log"
Jan 30 22:12:34 crc kubenswrapper[4869]: I0130 22:12:34.782449 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-djhl2_9f702f9e-18d7-4559-8745-8b691886d766/extract-content/0.log"
Jan 30 22:12:34 crc kubenswrapper[4869]: I0130 22:12:34.799484 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-djhl2_9f702f9e-18d7-4559-8745-8b691886d766/extract-utilities/0.log"
Jan 30 22:12:34 crc kubenswrapper[4869]: I0130 22:12:34.982272 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wxw2l_2326e182-60d7-4af7-8845-8e688d90b0a1/extract-utilities/0.log"
Jan 30 22:12:35 crc kubenswrapper[4869]: I0130 22:12:35.221427 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-djhl2_9f702f9e-18d7-4559-8745-8b691886d766/registry-server/0.log"
Jan 30 22:12:35 crc kubenswrapper[4869]: I0130 22:12:35.245798 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wxw2l_2326e182-60d7-4af7-8845-8e688d90b0a1/extract-utilities/0.log"
Jan 30 22:12:35 crc kubenswrapper[4869]: I0130 22:12:35.265339 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wxw2l_2326e182-60d7-4af7-8845-8e688d90b0a1/extract-content/0.log"
Jan 30 22:12:35 crc kubenswrapper[4869]: I0130 22:12:35.279615 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wxw2l_2326e182-60d7-4af7-8845-8e688d90b0a1/extract-content/0.log"
Jan 30 22:12:35 crc kubenswrapper[4869]: I0130 22:12:35.401108 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wxw2l_2326e182-60d7-4af7-8845-8e688d90b0a1/extract-utilities/0.log"
Jan 30 22:12:35 crc kubenswrapper[4869]: I0130 22:12:35.446214 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wxw2l_2326e182-60d7-4af7-8845-8e688d90b0a1/extract-content/0.log"
Jan 30 22:12:35 crc kubenswrapper[4869]: I0130 22:12:35.597685 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-55fhb_059ebbdc-d9b5-4a32-a167-30dfeae746ff/marketplace-operator/0.log"
Jan 30 22:12:35 crc kubenswrapper[4869]: I0130 22:12:35.746946 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-h9l9x_dd5e41fc-1989-4b7d-b1ca-195a9ce9b0ae/extract-utilities/0.log"
Jan 30 22:12:35 crc kubenswrapper[4869]: I0130 22:12:35.833479 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wxw2l_2326e182-60d7-4af7-8845-8e688d90b0a1/registry-server/0.log"
Jan 30 22:12:35 crc kubenswrapper[4869]: I0130 22:12:35.894631 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-h9l9x_dd5e41fc-1989-4b7d-b1ca-195a9ce9b0ae/extract-content/0.log"
Jan 30 22:12:35 crc kubenswrapper[4869]: I0130 22:12:35.933248 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-h9l9x_dd5e41fc-1989-4b7d-b1ca-195a9ce9b0ae/extract-utilities/0.log"
Jan 30 22:12:35 crc kubenswrapper[4869]: I0130 22:12:35.976315 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-h9l9x_dd5e41fc-1989-4b7d-b1ca-195a9ce9b0ae/extract-content/0.log"
Jan 30 22:12:36 crc kubenswrapper[4869]: I0130 22:12:36.141531 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-h9l9x_dd5e41fc-1989-4b7d-b1ca-195a9ce9b0ae/extract-utilities/0.log"
Jan 30 22:12:36 crc kubenswrapper[4869]: I0130 22:12:36.162010 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-h9l9x_dd5e41fc-1989-4b7d-b1ca-195a9ce9b0ae/extract-content/0.log"
Jan 30 22:12:36 crc kubenswrapper[4869]: I0130 22:12:36.199890 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-h9l9x_dd5e41fc-1989-4b7d-b1ca-195a9ce9b0ae/registry-server/0.log"
Jan 30 22:12:36 crc kubenswrapper[4869]: I0130 22:12:36.324132 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9n2gh_6abcba63-fc26-470d-b5bb-1a9e084cb65f/extract-utilities/0.log"
Jan 30 22:12:36 crc kubenswrapper[4869]: I0130 22:12:36.503544 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9n2gh_6abcba63-fc26-470d-b5bb-1a9e084cb65f/extract-content/0.log"
Jan 30 22:12:36 crc kubenswrapper[4869]: I0130 22:12:36.521255 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9n2gh_6abcba63-fc26-470d-b5bb-1a9e084cb65f/extract-utilities/0.log"
Jan 30 22:12:36 crc kubenswrapper[4869]: I0130 22:12:36.526374 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9n2gh_6abcba63-fc26-470d-b5bb-1a9e084cb65f/extract-content/0.log"
Jan 30 22:12:36 crc kubenswrapper[4869]: I0130 22:12:36.697731 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9n2gh_6abcba63-fc26-470d-b5bb-1a9e084cb65f/extract-content/0.log"
Jan 30 22:12:36 crc kubenswrapper[4869]: I0130 22:12:36.727924 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9n2gh_6abcba63-fc26-470d-b5bb-1a9e084cb65f/extract-utilities/0.log"
Jan 30 22:12:37 crc kubenswrapper[4869]: I0130 22:12:37.038244 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9n2gh_6abcba63-fc26-470d-b5bb-1a9e084cb65f/registry-server/0.log"
Jan 30 22:12:42 crc kubenswrapper[4869]: I0130 22:12:42.876855 4869 scope.go:117] "RemoveContainer" containerID="37739f8530d75ca7514f9e5e898fcdf553eb88295e259a8f463f2bada9b2701f"
Jan 30 22:12:42 crc kubenswrapper[4869]: E0130 22:12:42.877827 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vzgdv_openshift-machine-config-operator(b6fc0664-5e80-440d-a6e8-4189cdf5c500)\"" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500"
Jan 30 22:12:53 crc kubenswrapper[4869]: I0130 22:12:53.877320 4869 scope.go:117] "RemoveContainer" containerID="37739f8530d75ca7514f9e5e898fcdf553eb88295e259a8f463f2bada9b2701f"
Jan 30 22:12:53 crc kubenswrapper[4869]: E0130 22:12:53.878137 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vzgdv_openshift-machine-config-operator(b6fc0664-5e80-440d-a6e8-4189cdf5c500)\"" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500"
Jan 30 22:13:07 crc kubenswrapper[4869]: I0130 22:13:07.876409 4869 scope.go:117] "RemoveContainer" containerID="37739f8530d75ca7514f9e5e898fcdf553eb88295e259a8f463f2bada9b2701f"
Jan 30 22:13:07 crc kubenswrapper[4869]: E0130 22:13:07.877168 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vzgdv_openshift-machine-config-operator(b6fc0664-5e80-440d-a6e8-4189cdf5c500)\"" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500"
Jan 30 22:13:21 crc kubenswrapper[4869]: I0130 22:13:21.876755 4869 scope.go:117] "RemoveContainer" containerID="37739f8530d75ca7514f9e5e898fcdf553eb88295e259a8f463f2bada9b2701f"
Jan 30 22:13:21 crc kubenswrapper[4869]: E0130 22:13:21.877519 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vzgdv_openshift-machine-config-operator(b6fc0664-5e80-440d-a6e8-4189cdf5c500)\"" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500"
Jan 30 22:13:36 crc kubenswrapper[4869]: I0130 22:13:36.878094 4869 scope.go:117] "RemoveContainer" containerID="37739f8530d75ca7514f9e5e898fcdf553eb88295e259a8f463f2bada9b2701f"
Jan 30 22:13:36 crc kubenswrapper[4869]: E0130 22:13:36.878843 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vzgdv_openshift-machine-config-operator(b6fc0664-5e80-440d-a6e8-4189cdf5c500)\"" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500"
Jan 30 22:13:50 crc kubenswrapper[4869]: I0130 22:13:50.876953 4869 scope.go:117] "RemoveContainer" containerID="37739f8530d75ca7514f9e5e898fcdf553eb88295e259a8f463f2bada9b2701f"
Jan 30 22:13:50 crc kubenswrapper[4869]: E0130 22:13:50.878668 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vzgdv_openshift-machine-config-operator(b6fc0664-5e80-440d-a6e8-4189cdf5c500)\"" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500"
Jan 30 22:14:01 crc kubenswrapper[4869]: I0130 22:14:01.877530 4869 scope.go:117] "RemoveContainer" containerID="37739f8530d75ca7514f9e5e898fcdf553eb88295e259a8f463f2bada9b2701f"
Jan 30 22:14:01 crc kubenswrapper[4869]: E0130 22:14:01.878362 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vzgdv_openshift-machine-config-operator(b6fc0664-5e80-440d-a6e8-4189cdf5c500)\"" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500"
Jan 30 22:14:02 crc kubenswrapper[4869]: I0130 22:14:02.688202 4869 generic.go:334] "Generic (PLEG): container finished" podID="f488504a-98f8-43bf-9662-421de98bf15f" containerID="41a2928eac81d08135621b39c847672c6736d873330dcc7ea518f0cd6fd2fddf" exitCode=0
Jan 30 22:14:02 crc kubenswrapper[4869]: I0130 22:14:02.688250 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bqp4r/must-gather-j8pj4" event={"ID":"f488504a-98f8-43bf-9662-421de98bf15f","Type":"ContainerDied","Data":"41a2928eac81d08135621b39c847672c6736d873330dcc7ea518f0cd6fd2fddf"}
Jan 30 22:14:02 crc kubenswrapper[4869]: I0130 22:14:02.688775 4869 scope.go:117] "RemoveContainer" containerID="41a2928eac81d08135621b39c847672c6736d873330dcc7ea518f0cd6fd2fddf"
Jan 30 22:14:02 crc kubenswrapper[4869]: I0130 22:14:02.862511 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bqp4r_must-gather-j8pj4_f488504a-98f8-43bf-9662-421de98bf15f/gather/0.log"
Jan 30 22:14:13 crc kubenswrapper[4869]: I0130 22:14:13.145992 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bqp4r/must-gather-j8pj4"]
Jan 30 22:14:13 crc kubenswrapper[4869]: I0130 22:14:13.149382 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-bqp4r/must-gather-j8pj4" podUID="f488504a-98f8-43bf-9662-421de98bf15f" containerName="copy" containerID="cri-o://d5928bb2ee22dd017a4d324d364509cf2fc828df304251a4531b8218ddef7924" gracePeriod=2
Jan 30 22:14:13 crc kubenswrapper[4869]: I0130 22:14:13.152125 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bqp4r/must-gather-j8pj4"]
Jan 30 22:14:13 crc kubenswrapper[4869]: I0130 22:14:13.486473 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bqp4r_must-gather-j8pj4_f488504a-98f8-43bf-9662-421de98bf15f/copy/0.log"
Jan 30 22:14:13 crc kubenswrapper[4869]: I0130 22:14:13.488026 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bqp4r/must-gather-j8pj4"
Jan 30 22:14:13 crc kubenswrapper[4869]: I0130 22:14:13.500974 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfxpv\" (UniqueName: \"kubernetes.io/projected/f488504a-98f8-43bf-9662-421de98bf15f-kube-api-access-jfxpv\") pod \"f488504a-98f8-43bf-9662-421de98bf15f\" (UID: \"f488504a-98f8-43bf-9662-421de98bf15f\") "
Jan 30 22:14:13 crc kubenswrapper[4869]: I0130 22:14:13.501129 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f488504a-98f8-43bf-9662-421de98bf15f-must-gather-output\") pod \"f488504a-98f8-43bf-9662-421de98bf15f\" (UID: \"f488504a-98f8-43bf-9662-421de98bf15f\") "
Jan 30 22:14:13 crc kubenswrapper[4869]: I0130 22:14:13.513849 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f488504a-98f8-43bf-9662-421de98bf15f-kube-api-access-jfxpv" (OuterVolumeSpecName: "kube-api-access-jfxpv") pod "f488504a-98f8-43bf-9662-421de98bf15f" (UID: "f488504a-98f8-43bf-9662-421de98bf15f"). InnerVolumeSpecName "kube-api-access-jfxpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 22:14:13 crc kubenswrapper[4869]: I0130 22:14:13.581087 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f488504a-98f8-43bf-9662-421de98bf15f-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "f488504a-98f8-43bf-9662-421de98bf15f" (UID: "f488504a-98f8-43bf-9662-421de98bf15f"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 22:14:13 crc kubenswrapper[4869]: I0130 22:14:13.602669 4869 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f488504a-98f8-43bf-9662-421de98bf15f-must-gather-output\") on node \"crc\" DevicePath \"\""
Jan 30 22:14:13 crc kubenswrapper[4869]: I0130 22:14:13.602705 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfxpv\" (UniqueName: \"kubernetes.io/projected/f488504a-98f8-43bf-9662-421de98bf15f-kube-api-access-jfxpv\") on node \"crc\" DevicePath \"\""
Jan 30 22:14:13 crc kubenswrapper[4869]: I0130 22:14:13.763169 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bqp4r_must-gather-j8pj4_f488504a-98f8-43bf-9662-421de98bf15f/copy/0.log"
Jan 30 22:14:13 crc kubenswrapper[4869]: I0130 22:14:13.763522 4869 generic.go:334] "Generic (PLEG): container finished" podID="f488504a-98f8-43bf-9662-421de98bf15f" containerID="d5928bb2ee22dd017a4d324d364509cf2fc828df304251a4531b8218ddef7924" exitCode=143
Jan 30 22:14:13 crc kubenswrapper[4869]: I0130 22:14:13.763567 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bqp4r/must-gather-j8pj4"
Jan 30 22:14:13 crc kubenswrapper[4869]: I0130 22:14:13.763604 4869 scope.go:117] "RemoveContainer" containerID="d5928bb2ee22dd017a4d324d364509cf2fc828df304251a4531b8218ddef7924"
Jan 30 22:14:13 crc kubenswrapper[4869]: I0130 22:14:13.781005 4869 scope.go:117] "RemoveContainer" containerID="41a2928eac81d08135621b39c847672c6736d873330dcc7ea518f0cd6fd2fddf"
Jan 30 22:14:13 crc kubenswrapper[4869]: I0130 22:14:13.815576 4869 scope.go:117] "RemoveContainer" containerID="d5928bb2ee22dd017a4d324d364509cf2fc828df304251a4531b8218ddef7924"
Jan 30 22:14:13 crc kubenswrapper[4869]: E0130 22:14:13.816087 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5928bb2ee22dd017a4d324d364509cf2fc828df304251a4531b8218ddef7924\": container with ID starting with d5928bb2ee22dd017a4d324d364509cf2fc828df304251a4531b8218ddef7924 not found: ID does not exist" containerID="d5928bb2ee22dd017a4d324d364509cf2fc828df304251a4531b8218ddef7924"
Jan 30 22:14:13 crc kubenswrapper[4869]: I0130 22:14:13.816146 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5928bb2ee22dd017a4d324d364509cf2fc828df304251a4531b8218ddef7924"} err="failed to get container status \"d5928bb2ee22dd017a4d324d364509cf2fc828df304251a4531b8218ddef7924\": rpc error: code = NotFound desc = could not find container \"d5928bb2ee22dd017a4d324d364509cf2fc828df304251a4531b8218ddef7924\": container with ID starting with d5928bb2ee22dd017a4d324d364509cf2fc828df304251a4531b8218ddef7924 not found: ID does not exist"
Jan 30 22:14:13 crc kubenswrapper[4869]: I0130 22:14:13.816180 4869 scope.go:117] "RemoveContainer" containerID="41a2928eac81d08135621b39c847672c6736d873330dcc7ea518f0cd6fd2fddf"
Jan 30 22:14:13 crc kubenswrapper[4869]: E0130 22:14:13.816529 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41a2928eac81d08135621b39c847672c6736d873330dcc7ea518f0cd6fd2fddf\": container with ID starting with 41a2928eac81d08135621b39c847672c6736d873330dcc7ea518f0cd6fd2fddf not found: ID does not exist" containerID="41a2928eac81d08135621b39c847672c6736d873330dcc7ea518f0cd6fd2fddf"
Jan 30 22:14:13 crc kubenswrapper[4869]: I0130 22:14:13.816552 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41a2928eac81d08135621b39c847672c6736d873330dcc7ea518f0cd6fd2fddf"} err="failed to get container status \"41a2928eac81d08135621b39c847672c6736d873330dcc7ea518f0cd6fd2fddf\": rpc error: code = NotFound desc = could not find container \"41a2928eac81d08135621b39c847672c6736d873330dcc7ea518f0cd6fd2fddf\": container with ID starting with 41a2928eac81d08135621b39c847672c6736d873330dcc7ea518f0cd6fd2fddf not found: ID does not exist"
Jan 30 22:14:13 crc kubenswrapper[4869]: I0130 22:14:13.884666 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f488504a-98f8-43bf-9662-421de98bf15f" path="/var/lib/kubelet/pods/f488504a-98f8-43bf-9662-421de98bf15f/volumes"
Jan 30 22:14:14 crc kubenswrapper[4869]: I0130 22:14:14.877094 4869 scope.go:117] "RemoveContainer" containerID="37739f8530d75ca7514f9e5e898fcdf553eb88295e259a8f463f2bada9b2701f"
Jan 30 22:14:14 crc kubenswrapper[4869]: E0130 22:14:14.877538 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vzgdv_openshift-machine-config-operator(b6fc0664-5e80-440d-a6e8-4189cdf5c500)\"" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500"
Jan 30 22:14:26 crc kubenswrapper[4869]: I0130 22:14:26.877144 4869 scope.go:117] "RemoveContainer" containerID="37739f8530d75ca7514f9e5e898fcdf553eb88295e259a8f463f2bada9b2701f"
Jan 30 22:14:26 crc kubenswrapper[4869]: E0130 22:14:26.878708 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vzgdv_openshift-machine-config-operator(b6fc0664-5e80-440d-a6e8-4189cdf5c500)\"" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500"
Jan 30 22:14:39 crc kubenswrapper[4869]: I0130 22:14:39.881467 4869 scope.go:117] "RemoveContainer" containerID="37739f8530d75ca7514f9e5e898fcdf553eb88295e259a8f463f2bada9b2701f"
Jan 30 22:14:39 crc kubenswrapper[4869]: E0130 22:14:39.882616 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vzgdv_openshift-machine-config-operator(b6fc0664-5e80-440d-a6e8-4189cdf5c500)\"" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500"
Jan 30 22:14:53 crc kubenswrapper[4869]: I0130 22:14:53.876346 4869 scope.go:117] "RemoveContainer" containerID="37739f8530d75ca7514f9e5e898fcdf553eb88295e259a8f463f2bada9b2701f"
Jan 30 22:14:53 crc kubenswrapper[4869]: E0130 22:14:53.877288 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vzgdv_openshift-machine-config-operator(b6fc0664-5e80-440d-a6e8-4189cdf5c500)\"" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500"
Jan 30 22:15:00 crc kubenswrapper[4869]: I0130 22:15:00.135845 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496855-v6z97"]
Jan 30 22:15:00 crc kubenswrapper[4869]: E0130 22:15:00.136785 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f488504a-98f8-43bf-9662-421de98bf15f" containerName="gather"
Jan 30 22:15:00 crc kubenswrapper[4869]: I0130 22:15:00.136802 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f488504a-98f8-43bf-9662-421de98bf15f" containerName="gather"
Jan 30 22:15:00 crc kubenswrapper[4869]: E0130 22:15:00.136820 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f488504a-98f8-43bf-9662-421de98bf15f" containerName="copy"
Jan 30 22:15:00 crc kubenswrapper[4869]: I0130 22:15:00.136827 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f488504a-98f8-43bf-9662-421de98bf15f" containerName="copy"
Jan 30 22:15:00 crc kubenswrapper[4869]: E0130 22:15:00.136848 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c641d0b-f1b6-4f02-8b72-820a4c0bd698" containerName="extract-content"
Jan 30 22:15:00 crc kubenswrapper[4869]: I0130 22:15:00.136855 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c641d0b-f1b6-4f02-8b72-820a4c0bd698" containerName="extract-content"
Jan 30 22:15:00 crc kubenswrapper[4869]: E0130 22:15:00.136866 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c641d0b-f1b6-4f02-8b72-820a4c0bd698" containerName="registry-server"
Jan 30 22:15:00 crc kubenswrapper[4869]: I0130 22:15:00.136911 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c641d0b-f1b6-4f02-8b72-820a4c0bd698" containerName="registry-server"
Jan 30 22:15:00 crc kubenswrapper[4869]: E0130 22:15:00.137007 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c641d0b-f1b6-4f02-8b72-820a4c0bd698" containerName="extract-utilities"
Jan 30 22:15:00 crc kubenswrapper[4869]: I0130 22:15:00.137019 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c641d0b-f1b6-4f02-8b72-820a4c0bd698" containerName="extract-utilities"
Jan 30 22:15:00 crc kubenswrapper[4869]: I0130 22:15:00.137147 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f488504a-98f8-43bf-9662-421de98bf15f" containerName="gather"
Jan 30 22:15:00 crc kubenswrapper[4869]: I0130 22:15:00.137171 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c641d0b-f1b6-4f02-8b72-820a4c0bd698" containerName="registry-server"
Jan 30 22:15:00 crc kubenswrapper[4869]: I0130 22:15:00.137185 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f488504a-98f8-43bf-9662-421de98bf15f" containerName="copy"
Jan 30 22:15:00 crc kubenswrapper[4869]: I0130 22:15:00.137648 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-v6z97"
Jan 30 22:15:00 crc kubenswrapper[4869]: I0130 22:15:00.139752 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 30 22:15:00 crc kubenswrapper[4869]: I0130 22:15:00.139767 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 30 22:15:00 crc kubenswrapper[4869]: I0130 22:15:00.147171 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496855-v6z97"]
Jan 30 22:15:00 crc kubenswrapper[4869]: I0130 22:15:00.180345 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e02e25d-201c-4dc3-ba08-4308412ae8fd-secret-volume\") pod \"collect-profiles-29496855-v6z97\" (UID: \"2e02e25d-201c-4dc3-ba08-4308412ae8fd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-v6z97"
Jan 30 22:15:00 crc kubenswrapper[4869]: I0130 22:15:00.180424 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e02e25d-201c-4dc3-ba08-4308412ae8fd-config-volume\") pod \"collect-profiles-29496855-v6z97\" (UID: \"2e02e25d-201c-4dc3-ba08-4308412ae8fd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-v6z97"
Jan 30 22:15:00 crc kubenswrapper[4869]: I0130 22:15:00.180688 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9hpc\" (UniqueName: \"kubernetes.io/projected/2e02e25d-201c-4dc3-ba08-4308412ae8fd-kube-api-access-b9hpc\") pod \"collect-profiles-29496855-v6z97\" (UID: \"2e02e25d-201c-4dc3-ba08-4308412ae8fd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-v6z97"
Jan 30 22:15:00 crc kubenswrapper[4869]: I0130 22:15:00.281600 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e02e25d-201c-4dc3-ba08-4308412ae8fd-secret-volume\") pod \"collect-profiles-29496855-v6z97\" (UID: \"2e02e25d-201c-4dc3-ba08-4308412ae8fd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-v6z97"
Jan 30 22:15:00 crc kubenswrapper[4869]: I0130 22:15:00.281658 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e02e25d-201c-4dc3-ba08-4308412ae8fd-config-volume\") pod \"collect-profiles-29496855-v6z97\" (UID: \"2e02e25d-201c-4dc3-ba08-4308412ae8fd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-v6z97"
Jan 30 22:15:00 crc kubenswrapper[4869]: I0130 22:15:00.281705 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9hpc\" (UniqueName: \"kubernetes.io/projected/2e02e25d-201c-4dc3-ba08-4308412ae8fd-kube-api-access-b9hpc\") pod \"collect-profiles-29496855-v6z97\" (UID: \"2e02e25d-201c-4dc3-ba08-4308412ae8fd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-v6z97"
Jan 30 22:15:00 crc kubenswrapper[4869]: I0130 22:15:00.283353 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e02e25d-201c-4dc3-ba08-4308412ae8fd-config-volume\") pod \"collect-profiles-29496855-v6z97\" (UID: \"2e02e25d-201c-4dc3-ba08-4308412ae8fd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-v6z97"
Jan 30 22:15:00 crc kubenswrapper[4869]: I0130 22:15:00.288861 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e02e25d-201c-4dc3-ba08-4308412ae8fd-secret-volume\") pod \"collect-profiles-29496855-v6z97\" (UID: \"2e02e25d-201c-4dc3-ba08-4308412ae8fd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-v6z97"
Jan 30 22:15:00 crc kubenswrapper[4869]: I0130 22:15:00.298179 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9hpc\" (UniqueName: \"kubernetes.io/projected/2e02e25d-201c-4dc3-ba08-4308412ae8fd-kube-api-access-b9hpc\") pod \"collect-profiles-29496855-v6z97\" (UID: \"2e02e25d-201c-4dc3-ba08-4308412ae8fd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-v6z97"
Jan 30 22:15:00 crc kubenswrapper[4869]: I0130 22:15:00.457337 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-v6z97"
Jan 30 22:15:00 crc kubenswrapper[4869]: I0130 22:15:00.649041 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496855-v6z97"]
Jan 30 22:15:01 crc kubenswrapper[4869]: I0130 22:15:01.021967 4869 generic.go:334] "Generic (PLEG): container finished" podID="2e02e25d-201c-4dc3-ba08-4308412ae8fd" containerID="f26a70d0f77e130824049455d045a6c87a7bed55323a1593280f77dc1e02a8ca" exitCode=0
Jan 30 22:15:01 crc kubenswrapper[4869]: I0130 22:15:01.022069 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-v6z97" event={"ID":"2e02e25d-201c-4dc3-ba08-4308412ae8fd","Type":"ContainerDied","Data":"f26a70d0f77e130824049455d045a6c87a7bed55323a1593280f77dc1e02a8ca"}
Jan 30 22:15:01 crc kubenswrapper[4869]: I0130 22:15:01.022265 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-v6z97" event={"ID":"2e02e25d-201c-4dc3-ba08-4308412ae8fd","Type":"ContainerStarted","Data":"cdc589f6ab9674655b1712c8f6448329152a2620b69ba473c8f9b4f60dcd3a87"}
Jan 30 22:15:02 crc kubenswrapper[4869]: I0130 22:15:02.247108 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-v6z97"
Jan 30 22:15:02 crc kubenswrapper[4869]: I0130 22:15:02.408876 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e02e25d-201c-4dc3-ba08-4308412ae8fd-secret-volume\") pod \"2e02e25d-201c-4dc3-ba08-4308412ae8fd\" (UID: \"2e02e25d-201c-4dc3-ba08-4308412ae8fd\") "
Jan 30 22:15:02 crc kubenswrapper[4869]: I0130 22:15:02.408994 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9hpc\" (UniqueName: \"kubernetes.io/projected/2e02e25d-201c-4dc3-ba08-4308412ae8fd-kube-api-access-b9hpc\") pod \"2e02e25d-201c-4dc3-ba08-4308412ae8fd\" (UID: \"2e02e25d-201c-4dc3-ba08-4308412ae8fd\") "
Jan 30 22:15:02 crc kubenswrapper[4869]: I0130 22:15:02.409089 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e02e25d-201c-4dc3-ba08-4308412ae8fd-config-volume\") pod \"2e02e25d-201c-4dc3-ba08-4308412ae8fd\" (UID: \"2e02e25d-201c-4dc3-ba08-4308412ae8fd\") "
Jan 30 22:15:02 crc kubenswrapper[4869]: I0130 22:15:02.410061 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e02e25d-201c-4dc3-ba08-4308412ae8fd-config-volume" (OuterVolumeSpecName: "config-volume") pod "2e02e25d-201c-4dc3-ba08-4308412ae8fd" (UID: "2e02e25d-201c-4dc3-ba08-4308412ae8fd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 22:15:02 crc kubenswrapper[4869]: I0130 22:15:02.414505 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e02e25d-201c-4dc3-ba08-4308412ae8fd-kube-api-access-b9hpc" (OuterVolumeSpecName: "kube-api-access-b9hpc") pod "2e02e25d-201c-4dc3-ba08-4308412ae8fd" (UID: "2e02e25d-201c-4dc3-ba08-4308412ae8fd"). InnerVolumeSpecName "kube-api-access-b9hpc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 22:15:02 crc kubenswrapper[4869]: I0130 22:15:02.414651 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e02e25d-201c-4dc3-ba08-4308412ae8fd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2e02e25d-201c-4dc3-ba08-4308412ae8fd" (UID: "2e02e25d-201c-4dc3-ba08-4308412ae8fd"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 22:15:02 crc kubenswrapper[4869]: I0130 22:15:02.511088 4869 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e02e25d-201c-4dc3-ba08-4308412ae8fd-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 30 22:15:02 crc kubenswrapper[4869]: I0130 22:15:02.511382 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9hpc\" (UniqueName: \"kubernetes.io/projected/2e02e25d-201c-4dc3-ba08-4308412ae8fd-kube-api-access-b9hpc\") on node \"crc\" DevicePath \"\""
Jan 30 22:15:02 crc kubenswrapper[4869]: I0130 22:15:02.511397 4869 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e02e25d-201c-4dc3-ba08-4308412ae8fd-config-volume\") on node \"crc\" DevicePath \"\""
Jan 30 22:15:03 crc kubenswrapper[4869]: I0130 22:15:03.034163 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-v6z97" event={"ID":"2e02e25d-201c-4dc3-ba08-4308412ae8fd","Type":"ContainerDied","Data":"cdc589f6ab9674655b1712c8f6448329152a2620b69ba473c8f9b4f60dcd3a87"}
Jan 30 22:15:03 crc kubenswrapper[4869]: I0130 22:15:03.034212 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-v6z97"
Jan 30 22:15:03 crc kubenswrapper[4869]: I0130 22:15:03.034242 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdc589f6ab9674655b1712c8f6448329152a2620b69ba473c8f9b4f60dcd3a87"
Jan 30 22:15:05 crc kubenswrapper[4869]: I0130 22:15:05.876655 4869 scope.go:117] "RemoveContainer" containerID="37739f8530d75ca7514f9e5e898fcdf553eb88295e259a8f463f2bada9b2701f"
Jan 30 22:15:05 crc kubenswrapper[4869]: E0130 22:15:05.877201 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vzgdv_openshift-machine-config-operator(b6fc0664-5e80-440d-a6e8-4189cdf5c500)\"" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500"
Jan 30 22:15:19 crc kubenswrapper[4869]: I0130 22:15:19.879142 4869 scope.go:117] "RemoveContainer" containerID="37739f8530d75ca7514f9e5e898fcdf553eb88295e259a8f463f2bada9b2701f"
Jan 30 22:15:19 crc kubenswrapper[4869]: E0130 22:15:19.879765 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vzgdv_openshift-machine-config-operator(b6fc0664-5e80-440d-a6e8-4189cdf5c500)\"" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500"
Jan 30 22:15:34 crc kubenswrapper[4869]: I0130 22:15:34.876640 4869 scope.go:117] "RemoveContainer" containerID="37739f8530d75ca7514f9e5e898fcdf553eb88295e259a8f463f2bada9b2701f"
Jan 30 22:15:34 crc kubenswrapper[4869]: E0130 22:15:34.877375 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vzgdv_openshift-machine-config-operator(b6fc0664-5e80-440d-a6e8-4189cdf5c500)\"" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500"
Jan 30 22:15:49 crc kubenswrapper[4869]: I0130 22:15:49.880047 4869 scope.go:117] "RemoveContainer" containerID="37739f8530d75ca7514f9e5e898fcdf553eb88295e259a8f463f2bada9b2701f"
Jan 30 22:15:49 crc kubenswrapper[4869]: E0130 22:15:49.880632 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vzgdv_openshift-machine-config-operator(b6fc0664-5e80-440d-a6e8-4189cdf5c500)\"" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500"
Jan 30 22:16:02 crc kubenswrapper[4869]: I0130 22:16:02.877501 4869 scope.go:117] "RemoveContainer" containerID="37739f8530d75ca7514f9e5e898fcdf553eb88295e259a8f463f2bada9b2701f"
Jan 30 22:16:02 crc kubenswrapper[4869]: E0130 22:16:02.878713 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vzgdv_openshift-machine-config-operator(b6fc0664-5e80-440d-a6e8-4189cdf5c500)\"" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500"
Jan 30 22:16:14 crc kubenswrapper[4869]: I0130 22:16:14.877077 4869 scope.go:117] "RemoveContainer" containerID="37739f8530d75ca7514f9e5e898fcdf553eb88295e259a8f463f2bada9b2701f"
Jan 30 22:16:14 crc kubenswrapper[4869]: E0130 22:16:14.877864 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vzgdv_openshift-machine-config-operator(b6fc0664-5e80-440d-a6e8-4189cdf5c500)\"" pod="openshift-machine-config-operator/machine-config-daemon-vzgdv" podUID="b6fc0664-5e80-440d-a6e8-4189cdf5c500"